Jamris 2013 vol 7 no 3

pISSN 1897-8649 (PRINT) / eISSN 2080-2145 (ONLINE) VOLUME 7 N° 3 2013 www.jamris.org


Transcript of Jamris 2013 vol 7 no 3



Journal of Automation, Mobile Robotics & Intelligent Systems

Publisher: Industrial Research Institute for Automation and Measurements PIAP

Editor-in-Chief:

Janusz Kacprzyk (Systems Research Institute, Polish Academy of Sciences; PIAP, Poland)

Co-Editors:

Oscar Castillo (Tijuana Institute of Technology, Mexico)

Dimitar Filev (Research & Advanced Engineering, Ford Motor Company, USA)

Kaoru Hirota (Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology, Japan)

Witold Pedrycz (ECERF, University of Alberta, Canada)

Roman Szewczyk (PIAP, Warsaw University of Technology, Poland)

Executive Editor:

Anna Ladan [email protected]

Editorial Board:

Chairman: Janusz Kacprzyk (Polish Academy of Sciences; PIAP, Poland)

Plamen Angelov (Lancaster University, UK)

Zenn Bien (Korea Advanced Institute of Science and Technology, Korea)

Adam Borkowski (Polish Academy of Sciences, Poland)

Wolfgang Borutzky (Fachhochschule Bonn-Rhein-Sieg, Germany)

Chin Chen Chang (Feng Chia University, Taiwan)

Jorge Manuel Miranda Dias (University of Coimbra, Portugal)

Bogdan Gabryś (Bournemouth University, UK)

Jan Jabłkowski (PIAP, Poland)

Stanisław Kaczanowski (PIAP, Poland)

Tadeusz Kaczorek (Warsaw University of Technology, Poland)

Marian P. Kazmierkowski (Warsaw University of Technology, Poland)

Józef Korbicz (University of Zielona Góra, Poland)

Krzysztof Kozłowski (Poznan University of Technology, Poland)

Eckart Kramer (Fachhochschule Eberswalde, Germany)

Piotr Kulczycki (Cracow University of Technology, Poland)

Andrew Kusiak (University of Iowa, USA)

Mark Last (Ben–Gurion University of the Negev, Israel)

Patricia Melin (Tijuana Institute of Technology, Mexico)

Tadeusz Missala (PIAP, Poland)

Fazel Naghdy (University of Wollongong, Australia)

Zbigniew Nahorski (Polish Academy of Science, Poland)

Antoni Niederliński (Silesian University of Technology, Poland)

Witold Pedrycz (University of Alberta, Canada)

Duc Truong Pham (Cardiff University, UK)

Lech Polkowski (Polish-Japanese Institute of Information Technology, Poland)

Alain Pruski (University of Metz, France)

Leszek Rutkowski (Częstochowa University of Technology, Poland)

Klaus Schilling (Julius-Maximilians-University Würzburg, Germany)

Ryszard Tadeusiewicz (AGH University of Science and Technology in Cracow, Poland)

Stanisław Tarasiewicz (University of Laval, Canada)

Piotr Tatjewski (Warsaw University of Technology, Poland)

Władysław Torbicz (Polish Academy of Sciences, Poland)

Leszek Trybus (Rzeszów University of Technology, Poland)

René Wamkeue (University of Québec, Canada)

Janusz Zalewski (Florida Gulf Coast University, USA)

Marek Zaremba (University of Québec, Canada)

Associate Editors:

Mariusz Andrzejczak (BUMAR, Poland)

Katarzyna Rzeplinska-Rykała (PIAP, Poland)

Statistical Editor:

Małgorzata Kaliczynska (PIAP, Poland)

Webmaster:

Tomasz Kobyliński [email protected]

Editorial Office:

Industrial Research Institute for Automation and Measurements PIAP

Al. Jerozolimskie 202, 02-486 Warsaw, POLAND

Tel. +48-22-8740109, [email protected]

Copyright and reprint permissions

Executive Editor

If in doubt about the proper edition of contributions, please contact the Executive Editor. Articles are reviewed, excluding advertisements and descriptions of products.

All rights reserved ©


CONTENTS

The Application of Cognitive Computer Graphics to Economic Data Exploration
Olga Pilipczuk, Dmitri Eidenzon

A Skeleton Rule-based Expert System of New Generation
Władysław Brzozowski

Map Construction and Localization Using Lego Mindstorms NXT
Jasmeet Singh, Punam Bedi

Synthesis of Quasi-optimal Motion Crane’s Control in the Form of Feedback
Vjacheslav Lovejnkin, Yuriy Romasevich, Yuriy Chovnuk

Algorithms for Packing Soft Blocks of VLSI System
Marcin Iwaniec, Adam Janiak

A Comparison of Buffer Sizing Techniques in the Critical Chain Method. Case Study
Anna Slusarczyk, Dorota Kuchta, Philip Verlhust, Willem Huyghe, Koen Lauryssen, Torrino Debal

Rough Surface Description System in 2.5D Map for Mobile Robot Navigation
Adam Niewola


The Application of Cognitive Computer Graphics to Economic Data Exploration

Olga Pilipczuk, Dmitri Eidenzon

Submitted: 31st July 2012; accepted: 8th April 2013

Abstract: This paper describes the concept of cognitive graphics and its importance for the analysis and visualization of economic data. A brief description of cognitive graphics systems is presented. Particular attention is given to the NovoSpark Visualizer system, which not only enables data exploration but also allows color transformation of numerical data. An experiment is conducted to demonstrate how applying the color spectrum activates cognitive thinking and thus allows the researcher to quickly find a solution to the problem.

Keywords: cognitive computer graphics, color spectrum, NovoSpark Visualizer

1. Introduction

People’s desire to communicate their ideas and concepts, their vision of the world and their knowledge in the form of graphic images has been observed since the beginning of civilization. People exchange information by visual means, both abstract information and information with a direct basis in reality. Data visualization builds on the unique ability of a person to process huge amounts of visual information in a short period of time. Nowadays, visualization helps in conducting scientific research and is routinely used in technical disciplines, medicine, economics and management, and in art; that is, wherever we are struggling with problem solving and decision making.

Ensuring the adequacy of decisions is possible only on condition of proper interpretation of data analysis results, which is usually impossible without data visualization, especially when we try to process a large number of parameters simultaneously. In this case, visual analysis becomes more complicated because of the increased dimensionality of the input data. Additional graphics objects, along with verbal and numerical information describing them, emerge in the image and therefore increase the effort needed to interpret the results of the analysis. Hence the need arises to visualize multidimensional data as a single image that allows for easy interpretation of the results. The combination of computer graphics and the cognitive approach provides a way to solve problems quickly and to increase the efficiency of the tasks.

The main purpose of the article is to present the cognitive approach to visualization of multidimensional data and to show the effects of its application. An example of problem solving with the NovoSpark Visualizer tool is provided.

2. The Concept and Characteristics of Cognitive Graphics

The concept of cognitive graphics was first presented by the Russian scientist A. Zenkin in a paper describing research on the properties of various concepts in number theory. Cognitive graphics is a set of means and methods of graphic presentation of a problem which allows one to see the solution right away, or at least get a hint [Zenkin, 1991]. It is the direction of computer graphics that combines visualization mapped on a computer screen with the cognitive processes taking place in the human brain. Cognitive graphics allows the user to see a new direction or approach to solving a problem that was not visible when using traditional tools of data visualization. Verbal or symbolic knowledge about an object, phenomenon or situation is rarely able to provide as clear a perception and understanding as visual perception, for which the right hemisphere of the human brain is responsible.

There are three main cognitive functions of cogni-tive graphics [Zenkin, 1991]: 1. Creation of the models of knowledge represen-

tation, which would be able to provide uniform methods of objects presentation, typical for logi-cal thinking and images created with imaginary thinking;

2. Visualization of knowledge, we have not yet found text descriptions;

3. Finding ways to move from the observed images to formulation of hypotheses about the mecha-nisms and processes that are hidden in the dy-namics of the observed patterns.

So, in other words, the key functions of cognitive systems are the illustrative and the cognitive function. The first allows one to build various types of diagrams, charts and histograms, while the other helps the user exploit the natural human ability to think in terms of spatial images.

Visualization of multidimensional data in a single image not only allows performing qualitative analysis, but also allows using the results of visualization to confirm or refute hypotheses. The application of the cognitive approach to the visualization of multidimensional data can yield significant effects. At present there are no standard cognitive principles of information visualization, but there is an understanding that graphic images contain compressed information, available to the user, that helps in solving the problem. Each image accumulates the information from the life cycle of an object, which an expert then interprets using experience and knowledge.


In recent years a number of systems with a cognitive approach have been created to explore and visualize data. These systems have found applications in medicine, economics and management, statistics, construction, architecture, mathematics, physics, microelectronics, etc. [Novoselov, 2008; Rakczeeva, 1993]. They support solving problems connected with graphs, with three-dimensional maps of terrain, with structures of complex technology, with constructions of various surfaces and even with parts of the human body.

Interfaces with cognitive graphics can also be used for educational purposes, such as learning and knowledge management. The effectiveness of such a technology is based on involving the organs of human perception in the learning and control processes. For example, graphical tools for the creation of 3D models and three-dimensional worlds are used in the process of astronaut training.

When designing systems using cognitive computer graphics, intelligent tools are needed that perform the two key functions: cognitive and illustrative. Such systems should fit different areas of application and at the same time should be sufficiently portable across operating systems. Moreover, such a tool should create and display three-dimensional graphical objects oriented toward the modern software environment, attach arbitrary information to graphic objects, and provide the ability to use graphic materials from other graphics systems.

3. The Visible Color Spectrum and its Application in Computer Graphics

As previously stated, the most important element of cognitive graphics systems is an adequate representation of the data. The color spectrum is very often used in cognitive graphics systems for visualization of multidimensional data. A human being can see and understand all colors in the spectrum in a natural, intuitive way. The spectrum of colors is the part of the electromagnetic spectrum that can be detected by the human eye. It consists of the following simple colors: violet, indigo, blue, cyan, green, yellow, orange and red. A simple color is a visual impression caused by an electromagnetic wave with a wavelength between 380 and 740 nanometers.

Fig. 1. Color spectrum

Each color has a specific meaning. Cool colors are usually not reassuring. Violet is a combination of blue and red: red brings dynamism and activity to distanced blue. Light blue often has an effect of calmness and relaxation. Green has the same calming effect as blue, but it also contains some of the energy of yellow; generally, green has a balancing and harmonizing impact. Warm colors convey emotions from simple optimism or hope to anger. These are typically energetic and positive colors. Yellow is the first warm color in the spectrum. It carries warmth and joy, and at the same time it is gentle. The orange color usually creates excitement. Red has long been recognized as the color of strong emotions, and even aggression.

Because of this intuitive color perception by humans, in cognitive graphics systems suitable colors reflect the values of the various parameters. Purple indicates the minimal value of the selected parameter, red indicates the maximum value, and green the average. A similar scheme has been used in a variety of computer graphics systems, among others the NovoSpark Visualizer system presented in the next part of this article.
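The purple-to-red convention described above can be sketched as a simple mapping from a normalized parameter value to a hue on the visible spectrum. The function below is a hypothetical helper, not the actual NovoSpark palette:

```python
import colorsys

def spectrum_color(value, vmin, vmax):
    """Map a parameter value onto the violet-to-red visible spectrum.

    The minimum maps to violet, the midpoint to green and the maximum
    to red, mirroring the convention described in the text.
    """
    # Normalize the value to [0, 1] and clamp.
    t = (value - vmin) / (vmax - vmin)
    t = min(max(t, 0.0), 1.0)
    # Hue runs from 270 degrees (violet) down to 0 degrees (red).
    hue = (1.0 - t) * 270.0 / 360.0
    r, g, b = colorsys.hsv_to_rgb(hue, 1.0, 1.0)
    return (round(r, 3), round(g, 3), round(b, 3))

print(spectrum_color(0, 0, 100))    # minimum -> violet
print(spectrum_color(50, 0, 100))   # midpoint -> green
print(spectrum_color(100, 0, 100))  # maximum -> red
```

Any real system would interpolate through the full violet-indigo-blue-cyan-green-yellow-orange-red sequence; a plain HSV hue sweep is merely the shortest way to express the idea.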

4. NovoSpark Visualizer System – Conception and Methodology

Traditional methods of data visualization (charts, diagrams, icons, and others) cannot cope with the requirements of displaying three or more related parameters/variables and, most importantly, of the subsequent interpretation of the resulting images [Chernoff, 1973; Bajaj, 1999; Cleveland, 1993; Friendly, 2008; Friedman, 2008; D’Ocagne, 1885; Inselberg, 1985; Inselberg, 2009]. The main disadvantages and difficulties of traditional methods are:

– Determination of dimensionality;
– Different scales of parameters/variables;
– Detection of data trends;
– Ambiguity in mapping of dynamic changes in parameter/variable values.

The maximum dimensionality of images is limited by the dimensionality of the data presentation tools, such as paper or a computer screen. Therefore the dimensionality of an image representing a static state cannot exceed three and, in the case of dynamic data (when one of the measurements specifies a time interval), the maximum number of dimensions that can be shown on an image is limited to two.

Visualization of the integral system state must meet the following requirements:
– Usage of all state parameter values to create an image without loss of information;
– Dimensionality of a state image must not exceed two dimensions;
– Production of a single static image for state dynamics.

In mathematics, the transformation of a set of discrete values into a continuous two-dimensional space has been known for years [Fourier, 1808]. The first attempt to apply such a transformation to the visualization of multidimensional data was described in [Andrews, 1972] and was named “Andrews Curves”. Later, independently of this research, Volovodenko et al. [Volovodenko, 1991] proposed a new approach that allowed visualization of both static and dynamic data in one integral image. The proposed methodology has been adapted to modern computer technologies, which resulted in the implementation of the NovoSpark Visualizer tool [Novospark 2007, Novospark 2009].

The method of visualizing multidimensional objects and processes is based on two isometric spaces, whereby objects from one space (the data space) are called the originals, while objects from the other space play the role of images.



4.1. The Image of a Multidimensional Observation

Let us select a point-observation A in the N-dimensional affine point-vector space $R^N$ of the originals, $A = (a_0, a_1, \ldots, a_{N-1})$, and form a linear combination $f_A(t)$ of the functions $\{P_i(t)\}_{i=0}^{\infty}$ by the following rule:

$$f_A(t) = \sum_{i=0}^{N-1} a_i P_i(t), \qquad (1)$$

where $P_i(t)$ are Legendre polynomials, i.e. orthogonal polynomials with weight 1 defined on the segment $t \in [0, 1]$.

The relationship $A \leftrightarrow f_A(t)$ is a one-to-one correspondence and, hence, changes in the parameters/coordinates $a_i$ imply corresponding changes in the function $f_A(t)$.

Figure 2 shows images of the multidimensional observations A = (0, 0, 0, 0, 0, 0, 1) and B = (0, 0, 0, 0, 0, 0, -1) in the two-dimensional coordinate system f, t.
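Equation (1) can be sketched numerically with NumPy's Legendre module. One assumption is needed: NumPy's Legendre polynomials live on [-1, 1], so the argument is shifted onto the segment [0, 1] used in the text.

```python
import numpy as np
from numpy.polynomial import legendre

def image_curve(point, t):
    """Image f_A(t) = sum_i a_i P_i(t) of a multidimensional point,
    with Legendre polynomials shifted from [-1, 1] onto [0, 1]."""
    t = np.asarray(t, dtype=float)
    # legval evaluates sum_i c_i P_i(x) for a coefficient vector c.
    return legendre.legval(2.0 * t - 1.0, np.asarray(point, dtype=float))

t = np.linspace(0.0, 1.0, 101)
A = (0, 0, 0, 0, 0, 0, 1)
B = (0, 0, 0, 0, 0, 0, -1)
fA, fB = image_curve(A, t), image_curve(B, t)
# As in Fig. 2, the images of A and B = -A are mirror curves.
print(np.allclose(fA, -fB))  # True
```

Because the transformation is linear in the coordinates, negating a point simply flips its image curve about the t-axis, which is exactly what Figure 2 illustrates.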

Since the image of a point in a multidimensional space is a function-curve, it can be “painted” in accordance with the function values. Applying a color palette emphasizes similarities and/or differences between images and allows viewing these images in the coordinate system z, t, where one can render an arbitrary image “width” along the z-axis. Such a representation of an image $f_A(t)$ as a surface is called a “spectrum” of the multidimensional point-observation. Figure 3a shows colored images of the two multidimensional observations A and B in the coordinate system f, t, and Figure 3b the corresponding spectrums in the coordinate system z, t, where vertical lines on the spectrums correspond to arbitrarily selected isolines of the spectrum surface.

4.2. The Image of a Multidimensional Process

A multidimensional process can be considered as a set of sequential transitions from one multidimensional observation (state) to another, or as a set of multidimensional segments sequentially connecting system states – points in the multidimensional space of the originals.

If we connect the origin O in $R^N$ to the point A by the radius vector $\rho_A$, then the projections of $\rho_A$ coincide with the coordinates of the point A, i.e. $\rho_A = (a_0, a_1, \ldots, a_{N-1})$. The vector $\rho_A$ is called the representing vector of the polynomial $f_A(t)$. Here one can see the connection with the method of representing vectors [Volovodenko, Koryakina, 1980].

Selecting another point B in the multidimensional space $R^N$ produces the radius vector $\rho_B = (b_0, b_1, \ldots, b_{N-1})$. Now we can describe a multidimensional segment AB with fixed vertices, where one can define the radius vector for any point X belonging to the segment AB, $\rho_X = (x_0, x_1, \ldots, x_{N-1})$, which satisfies the following equation:

$$\rho_X(z) = \rho_A + z(\rho_B - \rho_A) = (x_0(z), x_1(z), \ldots, x_{N-1}(z)), \qquad (2)$$

where $z \in [0, 1]$. Similarly to (1) we can define an image of the point X at a position z:

$$X = X(z) \leftrightarrow f_X(t) = \sum_{i=0}^{N-1} x_i(z) P_i(t) = f_X(t, z), \qquad (3)$$

where $x_i(z) = a_i + z(b_i - a_i)$.

The set of equations above justifies the transition from the two-dimensional images $f_X(t)$ in the coordinate system f, t to the three-dimensional images $f_X(t, z)$ in the coordinate system f, t, z.

Due to the linearity of the transformation (1), the multidimensional segment AB corresponds to a smooth, continuously changing surface F, as shown in Fig. 4.

Fig. 4. The image of a multidimensional segment AB with the point ab=(0,0,0,0,0,0,0) in the coordinate system f, t, z
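Under the same [0, 1]-shifted Legendre assumption as before, the surface F for a segment AB follows directly from (2)–(3): interpolate the coordinates at each z and re-evaluate the image. A minimal sketch, not the NovoSpark implementation:

```python
import numpy as np
from numpy.polynomial import legendre

def segment_surface(a, b, n_t=51, n_z=51):
    """Surface f(t, z) for the multidimensional segment AB, where
    x_i(z) = a_i + z (b_i - a_i) and f(t, z) = sum_i x_i(z) P_i(2t - 1)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    t = np.linspace(0.0, 1.0, n_t)
    z = np.linspace(0.0, 1.0, n_z)
    surface = np.empty((n_z, n_t))
    for k, zk in enumerate(z):
        x = a + zk * (b - a)  # point X on the segment AB, eq. (2)
        surface[k] = legendre.legval(2.0 * t - 1.0, x)  # eq. (3)
    return t, z, surface

A = (0, 0, 0, 0, 0, 0, 1)
B = (0, 0, 0, 0, 0, 0, -1)
t, z, F = segment_surface(A, B)
# Cross-sections: z = 0 gives f_A and z = 1 gives f_B; the midpoint
# z = 0.5 is the image of ab = (0, ..., 0), i.e. the zero function,
# matching the flat middle line of Fig. 4.
print(np.allclose(F[0], -F[-1]))  # True
```

Rendering F as a colored surface in the f, t, z system (or its color-coded projection onto t, z) reproduces the kind of picture shown in Fig. 4 and Fig. 5.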

Let us look at the above surface F in more detail. The cross-section z = 0 produces the function $f(t, 0) = f_A(t)$. Likewise, the cross-section z = 1 generates the function $f(t, 1) = f_B(t)$. The intermediate surface cross-sections correspond to the intermediate segment points and produce the functions $f_{X^*}(t) = f_X(t, z^*)$ for $z = z^*$.

Fig. 2. Images of multidimensional observations A and B

Fig. 3. Images of multidimensional observations A and B: (a) in the coordinate system f, t and (b) in the coordinate system z, t

Thus, there is a one-to-one correspondence between points of the segment AB in the N-dimensional space of originals $R^N$ and cross-sections of the surface F in the 3-dimensional space of images $R^3$.

Since the domain of the arguments f, t and z is strictly defined, it is possible to precisely scale the axes of the coordinate system f, t, z, which is required for displaying the images on a graphical device (e.g. a computer monitor).

Although the image of a multidimensional process is defined in the coordinate system f, t, z, identification of process patterns can easily be done in the coordinate system t, z, using a color palette on the process state images as shown below in Fig. 5.

Fig. 5. A fragment of the image of a 12-channel ECG in the coordinate system t, z

Using the coordinate system f, t, z as a 3-dimensional image space allows defining and displaying the value of a “distance” between multidimensional objects from the original space. That is, two-dimensional images (in the coordinate system f, t) of multidimensional objects are rendered along the z-axis (in the coordinate system f, t, z) according to their distance from the selected origin.

Thus, the z-axis in the coordinate space of images f, t, z is called the “distance-axis”.

Any known distance metric can be used for the z-axis values, like Euclidean distance, Mahalanobis distance, time interval (to display a process) and/or others.
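The placement of images along the distance-axis can be sketched by computing each observation's distance from a chosen reference point. Euclidean distance is used here purely for illustration; any of the metrics mentioned above would fit the same slot:

```python
import numpy as np

def distance_axis(observations, reference):
    """z-coordinates for a set of multidimensional observations:
    each two-dimensional image is rendered at its distance from
    the chosen reference point (the selected origin)."""
    obs = np.asarray(observations, dtype=float)
    ref = np.asarray(reference, dtype=float)
    return np.linalg.norm(obs - ref, axis=1)

states = [(0, 0, 0), (1, 0, 0), (3, 4, 0)]
z = distance_axis(states, reference=(0, 0, 0))
print(z)  # [0. 1. 5.]
```

For a process, replacing the distance values with the observation timestamps orders the same images chronologically along z instead.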

The practical application of the above methodology is examined in the example presented in the following part of the article.

5. An Example of Using the NovoSpark Visualizer System in Decision-making

In order to show that a cognitive graphics system can help in decision making, the authors provide an example of problem solving for a health and fitness center. The promotion and marketing department is given the task of analyzing the demand for the various services of the center and creating a new price offer that takes the results of the analysis into account. The number of customer visits at specified times was counted, and large volumes of source data were collected during the research. To facilitate the analysis, the paper presents only the average number of visits grouped by day of the week. The obtained data was imported into the NovoSpark® Visualizer system. Table 1 presents the average number of customers using the pool facilities in 2011, grouped by day of the week.

Table 1. Pool customers’ average attendance in 2011

Time intervals M T W T F S S

8.00 – 9.00 8.3 12.2 10.0 13.3 14.5 22.5 10.1

9.00 – 10.00 8.6 14.2 15.4 17.1 16.3 28.9 33.5

10.00 – 11.00 22.1 26.3 20.5 25.5 21.1 35.4 36.3

11.00 – 12.00 25.0 26.9 22.2 29.5 35.5 60.5 51.2

12.00 – 13.00 21.3 23.5 25.9 31.1 34.7 72.3 55.5

13.00 – 14.00 25.8 19.0 30.2 27.4 37.4 68.4 60.4

14.00 – 15.00 25.6 18.4 31.1 29.5 39.4 73.5 57.7

15.00 – 16.00 33.4 34.2 32.7 35.3 42.6 66.6 55.1

16.00 – 17.00 34.4 45.1 38.5 46.5 65.2 63.7 45.5

17.00 – 18.00 42.2 51.8 43.3 56.6 64.7 41.9 42.4

18.00 – 19.00 50.1 55.7 48.6 58.2 60.5 34.9 25.3

19.00 – 20.00 38.9 40.1 46.2 42 37.3 45.1 18.1

20.00 – 21.00 27.5 26.0 32.2 40.2 24.2 22.9 17.2

21.00 – 22.00 12.6 18.9 16.1 25.7 10.3 12.8 15.8

22.00 – 23.00 5.3 8.9 7.7 10.3 2.1 12.2 8

First, the traditional visualization was made. Then the attendance analysis was performed on the basis of this visualization. After that, the authors provide the interpretation using the color spectrum and compare the results.

a)

b)

Fig. 6. Visualization of the observation “Swimming Pool – Monday” a) traditional interpretation b) spectral in-terpretation

Figure 6a shows the data on the average number of Monday visits. The x-axis shows the hourly intervals, the y-axis the number of clients. Although the traditional graph shows the dynamics quite clearly, the spectral interpretation (Fig. 6b) provides more information about the function value in the specified period of time. The variable shape of the curve interferes with comparing values.


The problem deepens when one image maps the functions for the entire week (Fig. 7). The x-axis shows the hourly intervals, the y-axis the number of clients. Comparing the curves is very complicated and does not directly answer the question of on which days and hours demand is greatest and on which it is least. The only thing one can easily see is that demand is smaller in the morning and evening than in the afternoon.

In the next step the data from Table 1 was standardized and a three-dimensional visualization was created. The result of these operations is shown in Figure 8. At this stage the system’s capabilities allow one to see the importance of the selected parameter at each point of the image. The color interpretation makes the image much more transparent. At the beginning, the image from Figure 7 is presented without dividing into days, to draw a picture of the situation in its entirety (Fig. 9).
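The preprocessing applied to Table 1 can be sketched as follows. The column-wise z-score is an assumption — the paper does not state which standardization NovoSpark Visualizer applies — and the raw matrix lets us read off the peak cell directly:

```python
import numpy as np

# Table 1: pool attendance, rows = hourly intervals 8.00-23.00,
# columns = Monday .. Sunday.
pool = np.array([
    [ 8.3, 12.2, 10.0, 13.3, 14.5, 22.5, 10.1],
    [ 8.6, 14.2, 15.4, 17.1, 16.3, 28.9, 33.5],
    [22.1, 26.3, 20.5, 25.5, 21.1, 35.4, 36.3],
    [25.0, 26.9, 22.2, 29.5, 35.5, 60.5, 51.2],
    [21.3, 23.5, 25.9, 31.1, 34.7, 72.3, 55.5],
    [25.8, 19.0, 30.2, 27.4, 37.4, 68.4, 60.4],
    [25.6, 18.4, 31.1, 29.5, 39.4, 73.5, 57.7],
    [33.4, 34.2, 32.7, 35.3, 42.6, 66.6, 55.1],
    [34.4, 45.1, 38.5, 46.5, 65.2, 63.7, 45.5],
    [42.2, 51.8, 43.3, 56.6, 64.7, 41.9, 42.4],
    [50.1, 55.7, 48.6, 58.2, 60.5, 34.9, 25.3],
    [38.9, 40.1, 46.2, 42.0, 37.3, 45.1, 18.1],
    [27.5, 26.0, 32.2, 40.2, 24.2, 22.9, 17.2],
    [12.6, 18.9, 16.1, 25.7, 10.3, 12.8, 15.8],
    [ 5.3,  8.9,  7.7, 10.3,  2.1, 12.2,  8.0],
])
days = ["Mon", "Tue", "Wed", "Thu", "Fri", "Sat", "Sun"]
hours = [f"{h}.00-{h + 1}.00" for h in range(8, 23)]

# Column-wise z-score standardization (assumed form).
standardized = (pool - pool.mean(axis=0)) / pool.std(axis=0)

# The hottest raw cell agrees with the analysis in the text:
# peak pool demand falls on Saturday in the early afternoon.
row, col = np.unravel_index(pool.argmax(), pool.shape)
print(days[col], hours[row])  # Sat 14.00-15.00
```

Rendering `standardized` as a color-coded grid (violet for the lowest values through green to red for the highest) gives a plain-matrix analogue of the spectral images in Figs. 9 and 10.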

Fig. 7. Visualization of the observation “swimming pool” – traditional interpretation

Fig. 8. The image of multidimensional observation for “swimming pool” (standardized data)

In this figure all days of the week are labeled sequentially, i.e. 0 represents Monday, 1 Tuesday, etc. Parallel coordinates were used to create this picture. At first look, we can easily see that demand for services is much higher on weekends. Only a small number of people use the pool in the morning. Evening hours are apparently not popular either. Furthermore, it is easy to see the low demand until 15.00 on weekdays.

Fig. 9. Visualization of the observation “swimming pool” – spectral interpretation

To obtain more accurate information, the authors divide the image into days (Fig. 10). Days of the week are placed on the y-axis, from Monday at the top to Sunday at the bottom; time intervals are placed on the x-axis. Now we see that the greatest demand occurs on Saturdays from 11.00 to 17.00 and on Fridays from 16.00 to 19.00. The lowest demand is on Mondays and Wednesdays. The number of people using the pool in the morning is not large, except on Saturday, when demand is slightly higher. Furthermore, it is easy to see the low demand until 15.00 on weekdays and after 15.00 on weekends. The lowest morning demand is noticed on Monday, and the lowest evening demand on Friday.

Fig. 10. Visualization of the observation “swimming pool” – spectral interpretation divided into days

We would see a different situation if we analyzed the SPA customer data (Table 2).

Table 2. SPA customers’ average attendance in 2011

Time intervals M T W T F S S

8.00 – 9.00 7.2 11.2 11.7 13.6 14.5 2.2 2.8

9.00 – 10.00 8.5 17.5 14.5 14.4 16.7 8.7 3.8

10.00 – 11.00 21.1 24.2 22.8 14.9 21.3 15.1 16.2

11.00 – 12.00 26.5 26.5 23.4 18.5 25.1 20.5 21.7

12.00 – 13.00 20.3 22.7 27.6 20.2 34.4 22.1 25.1

13.00 – 14.00 27.5 20.6 31.9 28.5 37.5 28.5 25.4

14.00 – 15.00 24.7 16.6 32.7 35.3 39.8 33.9 27.3

15.00 – 16.00 33.5 35.4 35.5 28.4 42.2 26.3 25.9

16.00 – 17.00 35.6 31.5 37.5 27.7 35.5 23.3 23.7

17.00 – 18.00 24.2 26.9 34 22.2 34.7 21.1 22.1

18.00 – 19.00 21.3 26.7 21.7 19.7 30.2 24.5 15.5

19.00 – 20.00 17.5 22.3 14.5 10.2 27.1 15.3 8.5

20.00 – 21.00 6.3 7.7 11.1 11.2 14.1 12.4 7.6

21.00 – 22.00 5.4 5.8 7.4 6.3 5.5 6.6 3.1

22.00 – 23.00 0 0.5 0.7 0.7 1.3 2.7 0.5

Fig. 11. Visualization of the “SPA” observation – spec-tral interpretation


The evening hours are the least loaded, while the highest demand is in the afternoon (Fig. 11). In the morning, the demand subsides only on weekends.

Analyzing the fitness data from Table 3, we see that the demand is lowest on Monday (Fig. 12).

Table 3. Fitness customers’ average frequency in 2011

Time intervals M T W T F S S

8.00 – 9.00 6.2 8.4 10.6 7.7 14.3 22.7 10.2

9.00 – 10.00 8.7 14.4 15.5 9.5 16.1 28.5 23.8

10.00 – 11.00 12.2 16.2 20.1 15.3 21.2 35.7 26.5

11.00 – 12.00 15.3 16.5 22.2 19.5 25.5 30.2 21.6

12.00 – 13.00 11.5 13.7 25.5 21.1 24.4 32.4 25.6

13.00 – 14.00 15.5 19.3 20.3 27.5 27.6 38.1 27.1

14.00 – 15.00 20.2 18.6 21.1 29.8 29.8 33.5 30.2

15.00 – 16.00 23.7 24.2 22.7 35.6 32.2 33.6 25.4

16.00 – 17.00 24.1 25.5 28.8 32.2 35.5 33.3 25.8

17.00 – 18.00 22.3 31.1 33.5 36.5 34.4 31.2 25.3

18.00 – 19.00 20.1 25.6 28.9 28.6 30.1 24.9 22.1

19.00 – 20.00 18 20.1 26.3 22.1 17.2 21.3 8.4

20.00 – 21.00 17.5 16.9 22.6 20.2 14.5 12.4 3.9

21.00 – 22.00 8.9 12.8 11.1 15.4 10.2 2.5 0.7

22.00 – 23.00 0.1 1.2 0.2 1.0 0.4 2.2 0.3

Generally, the beginning of the week is not very popular. Definitely more customers use the gym services on Fridays and weekends. On Sunday the morning and noon hours are popular. Demand for services decreases significantly every day at 19.00.

Fig. 12. Visualization of the “fitness” observation – spectral interpretation

After analyzing all services separately, it seems reasonable to compare them within a single image in order to create a promotional package. If we use traditional visualization we get the picture shown in Figure 13a, which allows one to inquire about the general trends of demand changes. The x-axis shows the hourly intervals, the y-axis the number of clients.

After making a three-dimensional visualization we get a totally different picture. Figure 13b clearly shows the time intervals with the highest and lowest frequency. For each point on the surface, we can analyze the impact of the parameter value on the color of that point.

Observation of a multidimensional image works well in the detailed analysis of the data. However, if we want to find the source of a problem quickly, it is worthwhile to make a spectral interpretation. In Fig. 14, on the x-axis, the first 7 columns reproduce the demand for SPA services, the next 7 the swimming pool service, and the last 7 the fitness service. The y-axis shows the hourly intervals. The data is neither normalized nor standardized, and therefore the demand for the swimming pool in the figure is much higher than the demand for the other services.

Fig. 14. The observations of all services of the health and fitness centre

The figure allows one to see the similarity of the SPA and fitness frequency. The worst situation is in the evening (Fig. 15), especially on weekends. There is, therefore, a possibility to create a promotional offer connecting these hours. In the morning, the demand for SPA and fitness is different, so there is little point in joining them in a promotion at these hours. Because the number of customers in the entire center is minimal from 22.00 to 23.00, it is worth considering closing the center an hour earlier.

Fig. 13a. Visualization of all observations of the health and fitness centre – traditional interpretation

Fig. 13b. Multidimensional image of all services of the health and fitness centre


Fig. 15. The observations of all services of the health and fitness centre from 19.00 to 23.00

In the research experiment it was found that visualization made using the color spectrum has a significant advantage over traditional visualization. The more observations were taken into account during the experiment, the less clear the traditional charts became. The NovoSpark Corporation’s application enabled the authors not only to look at the situation in its entirety and immediately see the source of the problem, but also to perform a detailed analysis of selected aspects, for example to compare attendance at specific hours and days in order to make optimal decisions.

6. Conclusion

By introducing an element of cognitive intelligence into traditional technologies, modern cognitive graphics systems often contribute to the creation of entirely new knowledge about the researched object. The research performed in this article represents only a small fraction of the capabilities of cognitive graphics. These systems can become a unique tool to study patterns and processes of the imagination in the future. They can thus compete with many systems built on the neuromarketing and neuroeconomics concepts, because cognitive graphics systems do not open the possibility of manipulating the mind; they just help people find the source of a problem in a short period of time.

AUTHORS

Olga Pilipczuk* – University of Szczecin, Mickiewicza 64, [email protected]

D. Eidenzon – NovoSpark Corporation, 418 Dansbury Dr., Waterloo, Ontario, [email protected]

*Corresponding author

References

[1] Zenkin A., Kognitivnaya kompyuternaya grafika (Когнитивная компьютерная графика), Nauka: Moscow, 1991.

[2] Novoselov Y., "Primenenie kognitivnoi grafiki v sistemach podderzki priniatija reszenii dla bloka kompensatsii obiema na atomnych stancijach" (Применение когнитивной графики в системах поддержки принятия решений для блока компенсации объёма на атомных станциях). In: Proceedings of the conference "Informacionnye sredstwa i technologii", MEI, vol. II, 2008, pp. 65–69.

[3] Rakczeeva T., "Kognitivnaya prezentacija ritmiczeskoi struktury EKG" (Когнитивное представление ритмической структуры ЭКГ), Programnye produkty i sistemy, no. 6, 1992, pp. 38–47.

[4] Bajaj Ch., Krishnamurthy B., Data Visualization Techniques, Wiley Computer Books, 1999.

[5] H. Chernoff, “The Use of Faces to Represent Points in K-Dimensional Space Graphically”. J. Am. Statist. Assoc., vol. 68, no. 342, 1973, pp. 361–368.

[6] Cleveland W., Visualizing Data, Hobart Press: New Jersey, 1993.

[7] D’Ocagne M., Coordonnées Parallèles et Axiales: Méthode de transformation géométrique et procédé nouveau de calcul graphique déduits de la considération des coordonnées parallèles, Gauthier-Villars: Paris, 1885.

[8] Friedman V., Data Visualization and Infographics, 14th Jan. 2008. Available at: http://www.smashingmagazine.com

[9] Friendly M., Milestones in the history of thematic cartography, statistical graphics, and data visualization, 2008. Available at: http://www.math.yorku.ca/SCS/Gallery/milestone/milestone.pdf

[10] Inselberg A., "The Plane with Parallel Coordinates", Visual Computer, vol. 1, no. 4, 1985, pp. 69–91.

[11] Inselberg A., Parallel Coordinates: Visual Multidimensional Geometry and its Applications, Springer, 2009.

[12] Fourier J., “Mémoire sur la propagation de la chaleur dans les corps solides”, Reprinted in “Mémoire sur la propagation de la chaleur dans les corps solides”, vol. 2, 1808, pp. 215–221.

[13] Andrews D.F., “Plots of High Dimensional Data”, Biometrics, vol. 28, 1972, pp. 125–136.

[14] Volovodenko V., Eidenzon D., Mylcev K., Metod i sistema vizualizacii mnogomernych obiektov i processov (Метод и система визуализации многомерных объектов и процессов), VINITI, vol. 1471–В91, 1991, p. 13.

[15] Volovodenko V., Koryakina G., "Matematiczeskoye obespetchenye metoda izobragayusich vectorov" (Математическое обеспечение метода изображающих векторов), VINITI, vol. 5213-80, 1980, p. 13.

[16] NovoSpark Corporation, Multidimensional Data Visualization, NovoSpark Corporation Web Site [Online], 2007, http://www.novospark.com.

[17] Eidenzon D., Volovodenko V., Method for visualization of multidimensional data, U.S. Patent Application No. 20090252436, 2009.


A Skeleton Rule-based Expert System Of New Generation

Władysław Brzozowski

Submitted: 31st July 2012; accepted: 8th April 2013

Abstract:
The paper presents a skeleton rule-based expert system of a new generation, named EXPERT 3.0, worked out and programmed by the Author. The notion of a new generation refers here to the implementation of the system's knowledge base in the form of a computer database; previous skeleton expert systems implemented knowledge bases as text files. First, the theory of expert systems, as one of the branches of Artificial Intelligence, is briefly presented. Then the Author's original algorithms of the system are described. Using the EXPERT 3.0 system, forward, backwards and mixed inference processes, as well as falsification of the main hypothesis, can be executed. The EXPERT 3.0 system may be loaded with any number of parallel knowledge bases from such domains as technical, medical or financial diagnostics, as well as equipment provisioning, forecasting and many others; in the paper, the inference process is illustrated by an example of the diagnostics of damage to an MKM 33 coal mill working in a 200 MW power unit. Finally, conclusions and recommendations are formulated.

Keywords: expert system, Artificial Intelligence, computer program, algorithm, inference process, fact, rule, technical diagnostics

1. Introduction

Expert systems are one of the basic branches of Artificial Intelligence (AI). The development of AI dates from the 1960s. Artificial neural networks were the first developed branch of AI. Expert systems appeared in the seventies. Then evolutionary algorithms and fuzzy logic systems followed. A very promising new branch of AI, called agent theory, and also the new branch of data mining, have appeared in the last few years.

Among the AI branches described above, expert systems were the first with important practical applications. The range of possible applications of expert systems is very wide. Expert systems are mostly applied to diagnostics: technical, medical, economic-financial and other, even such exotic domains as archaeology [1]. Besides diagnostics, expert systems may be applied to forecasting, equipment provisioning, improvement, repair, planning, interpreting, monitoring, control and instruction (teaching).

An expert system is a computer program. Since the seventies, many such computer programs have been written. However, due to hardware and software evolution, these programs have become obsolete. It is necessary to constantly create new programs, written in new computer languages and developed in new computer environments. In this paper, the EXPERT 3.0 computer program, a skeleton rule-based expert system of a new generation, is presented. This program has been worked out, written and developed personally by the Author. The notion of the new generation refers here to the implementation of the system's knowledge base (an element of AI) in the form of a computer database (an element of computer science); previous skeleton expert systems, such as, for example, the SOCRATES system [3], implemented their knowledge bases as text files.

Apart from some mathematical formulae of the uncertainty management module, borrowed from solutions of the SOCRATES skeleton expert system [3], the EXPERT 3.0 system is based on the Author's entirely original algorithms. These algorithms are described below in the paper. Execution of forward, backwards or mixed inference processes, as well as falsification of the main hypothesis, is possible using the EXPERT 3.0 system. The system is currently used principally as a didactic tool in the AI domain, but numerous economic and industrial applications of this system are also entirely possible. As an illustration of this thesis, a knowledge base for the technical diagnostics of an MKM 33 coal mill working in a 200 MW power unit [4, 5] has been loaded into the EXPERT 3.0 system and serves as an example of the inference process. Moreover, it is possible to load the EXPERT 3.0 system with any number of parallel expert systems (i.e. knowledge bases) from such domains as technical, medical or financial diagnostics, as well as equipment provisioning, forecasting and many others.

2. The Algorithms of the EXPERT 3.0 System

The EXPERT 3.0 system is a rule-based expert system; knowledge in such a system is represented in the form of facts and rules. The system comprises the following kinds of facts: introduced, intermediate and final, as well as the following types of facts: enumerated, real, integer and logical. The value of an introduced fact is determined by the user, not by the expert system. The value of an intermediate or final fact (as opposed to an introduced one) is determined not by the user, but by the expert system itself during the inference process. The values of the final facts are the aim of the inference process. An enumerated fact is a fact that assumes one value (or several values) from a set predefined by the user (in the table of values of enumerated facts, see below). Besides the enumerated facts, there are real and integer facts as well as logical facts, with values TRUE, FALSE or UNKNOWN (the EXPERT 3.0 system is based on three-valued logic).


In the system, a rule has the following syntax:

IF (PREMISE FIELD) THEN (CONCLUSION) (CF=?) (1)

where:
(PREMISE FIELD) – a complex logical expression over a single elementary premise, or over any number of elementary premises, joined together with the logical operators NOT, AND and/or OR, and also (optionally) enclosed in parentheses on mutually nested hierarchical levels, in any number of these levels. An elementary premise is a comparison (equality or inequality) of the current, actual value of a single fact with its reference value. During the inference process, an elementary premise accordingly returns the logical value TRUE, FALSE or UNKNOWN. The reference value may be given in the form of a constant. The constant may be simple (it then represents a definite invariable value) or complex, so-called computational. A computational constant is an arithmetical expression over the value of another fact; accordingly, the value of a computational constant may vary during the inference process. Using a computational constant, it is possible to compare the values of two specified facts in an elementary premise;
(CONCLUSION) – the assignment of a specified value to a single, intermediate or final, fact. This assignment may be realized using a constant, simple or computational. In the EXPERT 3.0 system, a rule is a so-called Horn clause, i.e. it has one and only one conclusion;
(CF) – the certainty factor of the rule. This is a variable from the [0, 1] range, characterizing the confidence in the correctness of the rule. A CF equal to 1 means absolute correctness of the rule. A CF equal to 0 need not mean that the rule is incorrect, but means zero confidence in its correctness. It is also possible to assign any intermediate value from the [0, 1] range to the CF of the rule. The CF may not be interpreted as a probability (for example, a CF equal to 0.5 does not mean that the rule is correct in half of the cases and incorrect in the other half). The CF should be attributed not only to the rule but also to the value of each introduced fact.

A special uncertainty management module, built into the EXPERT 3.0 system, then computes, from the CFs of the values of the facts in the premises, the replacement CF of the entire premise field, and then, taking into account also the CF of the rule, computes the CF of the conclusion. If one and the same conclusion is deduced from two or more "fired" rules (the notion of "firing" will be explained below), an aggregated CF of the conclusion is furthermore computed. The notions of logical t-norm and s-norm appear in some mathematical formulae of these calculations. The Author has borrowed these formulae from solutions of the SOCRATES skeleton expert system [3]. To make it possible to assign a value to the fact in the conclusion of a rule, the rule should be activated, or, in the jargon of knowledge engineers, "fired". A rule can be "fired" if the logical value of its premise field equals TRUE and the computed CF of its conclusion is higher than a certain minimal threshold value. The notion of the CF is very important, taking into consideration the general uncertainty accompanying different processes. A skeleton expert system that does not contain an uncertainty management module should not be used. Niederliński [8] calls such a system (i.e. a system in which all CFs are, a priori, equal to 1) precise. This is of course a misunderstanding; such a system should rather be called inexact.
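The rule anatomy described above (a premise field, a single Horn-clause conclusion, a rule CF, and the firing condition) can be sketched as follows; all names here are illustrative, not taken from EXPERT 3.0:

```python
from dataclasses import dataclass
from typing import Callable, Dict, Tuple

# fact name -> (current value, CF of that value); names are illustrative
Facts = Dict[str, Tuple[object, float]]

@dataclass
class Rule:
    # premise(facts) returns (logical value in {"T", "F", "U"}, CF of the premise field)
    premise: Callable[[Facts], Tuple[str, float]]
    conclusion: Tuple[str, object]   # exactly one conclusion: a Horn clause
    cf: float                        # certainty factor of the rule, from [0, 1]

def can_fire(rule: Rule, facts: Facts, cf_threshold: float) -> bool:
    """A rule fires only if its premise field is TRUE and the CF of its
    conclusion (premise-field CF combined with the rule CF) exceeds the threshold."""
    lv, cf_pf = rule.premise(facts)
    return lv == "T" and cf_pf * rule.cf > cf_threshold

# IF motor_vibration = high THEN bearing_state = initial_failure, CF = 0.8
r = Rule(
    premise=lambda f: ("T", f["motor_vibration"][1])
            if f.get("motor_vibration", (None, 0.0))[0] == "high" else ("U", 0.0),
    conclusion=("bearing_state", "initial_failure"),
    cf=0.8,
)
fires = can_fire(r, {"motor_vibration": ("high", 0.9)}, cf_threshold=0.2)  # True
```

The product combination of premise-field CF and rule CF anticipates formula (7) below; the threshold test mirrors the firing condition stated in the text.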

2.1. Forward Inference Process

During the forward inference process, the system deduces all possible conclusions, taking into account the actual values of the introduced facts. The block diagram in Figure 1 presents the algorithm of this process. In the blocks, the computations are realized according to the following formulae:

Block 1

∀F ∈ FF: F := unknown, CFF := 0;  ∀F ∈ FI: F := unknown, CFF := 0   (2)

where: F – fact; FF – set of final facts; FI – set of intermediate facts; CFF – CF of actual value of fact F.

Block 3

If, for example, the premise field has the form:

F1 = RV1 AND (F2 = RV2 OR (F3 = RV3 AND F4 = RV4)) (3)

where:
F1, F2, F3, F4 – facts (enumerated, in order to simplify the formulae);
RV1, RV2, RV3, RV4 – reference values of the facts F1–F4,

then the order of parameter computations will be as follows:

1) the logical value of the premise P3 : F3 = RV3, according to the truth table:

   F3:      = RV3   other than RV3   u3
   LVp3 :=  T       F                U

where:
T – TRUE; F – FALSE; U – UNKNOWN;
RV3 – reference value of the fact F3;
u3 – unknown actual value of the fact F3;
LVp3 – logical value of the premise P3.

2) the CF of the premise P3 : F3 = RV3, according to the formula:

(4)

where:
CFp3 – CF of the premise P3 : F3 = RV3;
CFf3 – CF of the actual value of the fact F3;
CFt – threshold value of CF.
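Since formula (4) itself is not reproduced in the text, the following sketch implements only what the surrounding definitions state: the truth-table behaviour of an elementary premise, plus the assumption (hypothetical here) that a satisfied premise inherits the CF of its fact's value:

```python
def eval_premise(actual, reference, cf_fact):
    """Elementary premise F = RV: TRUE if the fact equals the reference value,
    FALSE if it holds a different known value, UNKNOWN if the value is unknown.
    Returning the fact's CF for known values is an assumption; the exact
    formula (4) is not reproduced in the text."""
    if actual is None:              # u3: unknown actual value
        return "U", 0.0
    if actual == reference:
        return "T", cf_fact
    return "F", cf_fact

# Examples for a fact with CF 0.9 and reference value "high":
print(eval_premise("high", "high", 0.9))   # ('T', 0.9)
print(eval_premise("low", "high", 0.9))    # ('F', 0.9)
print(eval_premise(None, "high", 0.9))     # ('U', 0.0)
```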


3) the logical value and CF of the premise P4 : F4 = RV4 (analogously to points 1 and 2 above);

4) the logical value of the conjunction C3-4 : F3 = RV3 AND F4 = RV4, according to the truth table:

   LVp3 =    T           F           U
   LVp4 =    T   F   U   T   F   U   T   F   U
   LVc3-4 := T   F   U   F   F   F   U   F   U

where:
T – TRUE; F – FALSE; U – UNKNOWN;
LVp3 – logical value of the premise P3 : F3 = RV3;
LVp4 – logical value of the premise P4 : F4 = RV4;
LVc3-4 – logical value of the conjunction C3-4 : F3 = RV3 AND F4 = RV4.

5) the CF of the conjunction C3-4 : F3 = RV3 AND F4 = RV4, according to the formula (adapted from [3]):

CFc3-4 = t-l.n.(CFp3, CFp4) (5)

where:
t-l.n. – t-logical norm, chosen by the user according to the nature of the knowledge base;
CFp3 – CF of the premise P3 : F3 = RV3;
CFp4 – CF of the premise P4 : F4 = RV4;
CFc3-4 – CF of the conjunction C3-4 : F3 = RV3 AND F4 = RV4.
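Points 4 and 5 can be sketched with a Kleene-style three-valued AND matching the truth table, and two standard t-norms; the algebraic and Łukasiewicz variants are the ones named later for the s-norm, so their use for the t-norm here is an assumption:

```python
def and3(a, b):
    """Kleene three-valued AND over {"T", "F", "U"}, matching the truth table."""
    if a == "F" or b == "F":
        return "F"
    if a == "U" or b == "U":
        return "U"
    return "T"

def t_norm(cf1, cf2, kind="algebraic"):
    """Combine premise CFs into the conjunction CF with a user-chosen t-norm."""
    if kind == "algebraic":
        return cf1 * cf2                      # algebraic product
    if kind == "lukasiewicz":
        return max(0.0, cf1 + cf2 - 1.0)      # Lukasiewicz t-norm
    raise ValueError(kind)

# Example: both premises TRUE, with CFs 0.8 and 0.7.
lv_c34 = and3("T", "T")     # "T"
cf_c34 = t_norm(0.8, 0.7)   # algebraic: 0.8 * 0.7, about 0.56
```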

6) the logical value and CF of the premise P2 : F2 = RV2 (analogously to points 1 and 2 above);

START

1. Reset of the values of intermediate and final facts. Organization of program loop for iterations of the process. Acceptance of the first iteration to the calculation.

2. Organization of the program loop for rules decreed to the forward inference process and sorted according to the increasing level of rule, with elimination of already "fired" rules (if such a metarule has been admitted). Acceptance of the first rule for calculation.

yes

yes no

3. Identification of the elementary premises inside the parentheses nested deepest in the premise field of the rule. Reading and comparison of the actual values of facts in premises. Computation of the logical value (TRUE, FALSE or UNKNOWN) and CF of the aggregated elementary premise representing the syntax of the parentheses, taking into account the priority of logical operators (NOT – AND – OR) and utilizing formulae of the uncertainty management module [3]. Replacement of the parentheses by the premise as above. Does any parenthesis still occur in the syntax of the premise field of the rule?

5. Transmission of the rule to a stack of rules to be "fired". If the admitted metarule of the order of "firing" of rules is FIFO (First In, First Out), then "firing" of the rule. If another metarule is valid, then "firing" of rules is realized after closing of the program loop for rules. If the rule is "fired", the conclusion of the rule is visualized; if rules with the same conclusion have been "fired" before, the aggregated CF of the conclusion is additionally computed.

yes no

4. Computation, similarly as in block 3, of the logical value and CF of the whole premise field. Also computation of the CF of the conclusion of the rule, utilizing formulae of the uncertainty management module [3]. Does the premise field return the value TRUE, and is the computed CF of the conclusion higher than the threshold value?

A B C D A B C D

STOP

yes no

6. Has the last rule, from the program loop opened in block 2, already been examined? If not, then acceptance of the next rule for calculation.

yes no

7. Has, in the current iteration, any new rule been "fired", or has the CF of the conclusion of an already "fired" rule increased by more than the admitted threshold value? If yes, then acceptance of the next iteration for calculation.

Fig. 1. The block diagram of the algorithm of the forward inference process in the EXPERT 3.0 system
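The loop of Fig. 1 can be condensed into a sketch like the following (illustrative data structures; the metarules, visualization and aggregation steps are omitted):

```python
def forward_chain(rules, facts, cf_threshold=0.2):
    """rules: list of (premise_fn, (fact, value), rule_cf); premise_fn(facts)
    returns (logical value, CF of the premise field).
    facts: dict fact -> (value, cf). Returns the extended fact base."""
    changed = True
    while changed:                            # block 7: iterate while progress
        changed = False
        for premise_fn, (fact, value), rule_cf in rules:   # block 2
            lv, cf_pf = premise_fn(facts)                  # blocks 3-4
            cf_conc = cf_pf * rule_cf                      # formula (7)
            if lv == "T" and cf_conc > cf_threshold:       # block 4 test
                if facts.get(fact) != (value, cf_conc):
                    facts[fact] = (value, cf_conc)         # block 5: "fire"
                    changed = True
    return facts

# Toy chain: a=1 establishes b=2, which in turn establishes c=3.
rules = [
    (lambda f: ("T", f["a"][1]) if f.get("a", (None, 0))[0] == 1 else ("U", 0.0),
     ("b", 2), 0.9),
    (lambda f: ("T", f["b"][1]) if f.get("b", (None, 0))[0] == 2 else ("U", 0.0),
     ("c", 3), 1.0),
]
out = forward_chain(rules, {"a": (1, 1.0)})
print(out["c"])   # (3, 0.9)
```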



7) the logical value of the disjunction D2-3-4 : F2 = RV2 OR (F3 = RV3 AND F4 = RV4), according to the truth table:

   LVp2 =      T           F           U
   LVc3-4 =    T   F   U   T   F   U   T   F   U
   LVd2-3-4 := T   T   T   T   F   U   T   U   U

where:
T – TRUE; F – FALSE; U – UNKNOWN;
LVp2 – logical value of the premise P2 : F2 = RV2;
LVc3-4 – logical value of the conjunction C3-4 : F3 = RV3 AND F4 = RV4;
LVd2-3-4 – logical value of the disjunction D2-3-4 : F2 = RV2 OR (F3 = RV3 AND F4 = RV4).

8) the CF of the disjunction D2-3-4 : F2 = RV2 OR (F3 = RV3 AND F4 = RV4), according to the formula (adapted from [3]):

CFd2-3-4 = s-l.n.(CFp2, CFc3-4) (6)

where:
s-l.n. – s-logical norm, chosen by the user according to the nature of the knowledge base (alg. – algebraic, Lukas. – Łukasiewicz);
CFp2 – CF of the premise P2 : F2 = RV2;
CFc3-4 – CF of the conjunction C3-4 : F3 = RV3 AND F4 = RV4;
CFd2-3-4 – CF of the disjunction D2-3-4 : F2 = RV2 OR (F3 = RV3 AND F4 = RV4).
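Points 7 and 8 can be sketched with a Kleene three-valued OR matching the truth table and the two named s-norms; the exact formula (6) is not reproduced in the text, so the standard algebraic and Łukasiewicz s-norms are assumed:

```python
def or3(a, b):
    """Kleene three-valued OR over {"T", "F", "U"}, matching the truth table."""
    if a == "T" or b == "T":
        return "T"
    if a == "U" or b == "U":
        return "U"
    return "F"

def s_norm(cf1, cf2, kind="algebraic"):
    """Combine CFs into the disjunction CF with a user-chosen s-norm."""
    if kind == "algebraic":
        return cf1 + cf2 - cf1 * cf2          # algebraic sum
    if kind == "lukasiewicz":
        return min(1.0, cf1 + cf2)            # Lukasiewicz s-norm
    raise ValueError(kind)

# Example: LVp2 = U, LVc3-4 = T -> the disjunction is TRUE.
lv_d234 = or3("U", "T")       # "T"
cf_d234 = s_norm(0.5, 0.56)   # algebraic sum, about 0.78
```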

9) the logical value and CF of the premise P1 : F1 = RV1 (analogously to points 1 and 2 above);

10) the logical value of the conjunction C1-2-3-4 : F1 = RV1 AND (F2 = RV2 OR (F3 = RV3 AND F4 = RV4)) (analogously to point 4 above);

11) the CF of the conjunction C1-2-3-4 : F1 = RV1 AND (F2 = RV2 OR (F3 = RV3 AND F4 = RV4)) (analogously to point 5 above).

Block 4

The same formulae as in block 3, and additionally (adapted from [3]):

CFconc = CFpf · CFr (7)

where:
CFconc – CF of the conclusion of the rule;
CFpf – CF of the premise field of the rule;
CFr – CF of the rule.

Block 5

If the conclusion has been deduced from two rules with the same conclusion, then (adapted from [3]):

CFaggr = s-l.n.(CFconc1, CFconc2) (8)

where:
s-l.n. – s-logical norm, chosen by the user according to the nature of the knowledge base (alg. – algebraic, Lukas. – Łukasiewicz);
CFaggr – aggregated CF of the conclusion deduced from two rules;
CFconc1 – CF of the conclusion deduced from the first rule;
CFconc2 – CF of the conclusion deduced from the second rule.
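Formulas (7) and (8) reduce to a few lines; a sketch with the algebraic s-norm assumed for the aggregation:

```python
def cf_conclusion(cf_premise_field, cf_rule):
    """Formula (7): CF of the conclusion of a fired rule."""
    return cf_premise_field * cf_rule

def cf_aggregate(cf_conc1, cf_conc2, kind="algebraic"):
    """Formula (8): aggregated CF of one conclusion deduced from two fired rules."""
    if kind == "algebraic":
        return cf_conc1 + cf_conc2 - cf_conc1 * cf_conc2
    if kind == "lukasiewicz":
        return min(1.0, cf_conc1 + cf_conc2)
    raise ValueError(kind)

# Two rules deduce the same conclusion, with CFs 0.72 and 0.5:
cf = cf_aggregate(cf_conclusion(0.9, 0.8), 0.5)   # about 0.86
```

Note that aggregation can only increase the CF of a conclusion, never decrease it, which matches the intuition that independent supporting evidence strengthens confidence.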

2.2. Backwards Inference Process

Before a backwards or mixed inference process, the user must put the main hypothesis. The system then does not deduce all possible conclusions, but tries to prove only the main hypothesis. Only one of the conclusions of rules decreed to the backwards, or forward and backwards, inference process can be the main hypothesis. The block diagram in Figure 2 presents the algorithm of the backwards inference process. In the blocks, the computations are realized according to the following formulae:

Block 1

The same formulae as for the forward inference process (Fig. 1, block 1).

Block 3

The same formulae as for the forward inference process (Fig. 1, blocks 3–4) and additionally:

If, for example, the premise field of the examined rule has a form:

F1 = RV1 AND F2 = RV2 AND F3 = RV3 (9)

where:
F1 – an enumerated (in order to simplify the formulae) and introduced fact;
F2 – an enumerated (as above) and intermediate fact;
F3 – an enumerated (as above) and final fact;
RV1, RV2, RV3 – reference values of the facts F1–F3,

and the premise P1 : F1 = RV1 returns the logical value TRUE, the premise P2 : F2 = RV2 also returns the logical value TRUE, but the premise P3 : F3 = RV3 returns the logical value UNKNOWN due to the unknown actual value of the fact F3, or returns TRUE but the CF of this premise is smaller than the threshold value, then this examined rule cannot be "fired". At the same time, in the knowledge base there are, for example, the following rules:

……………………..
Rule 1: IF (premise field) THEN F3=V3.1 (10)
……………………..
Rule 2: IF (premise field) THEN F3=V3.2 (11)
……………………..



where:
V3.1, V3.2 – the (all) values of the enumerated fact F3, predefined by the user (in the table of values of enumerated facts, see below).

Consequently, the conclusions F3=V3.1 and F3=V3.2 will become auxiliary hypotheses. The IDs of the rules Rule 1 and Rule 2 will be added to the end of the work table of rules requiring examination.

Block 4

The same formulae as for the forward inference process (Fig. 1, blocks 3–4) and as in block 3 above.

Fig. 2. The block diagram of the algorithm of the backwards inference process in the EXPERT 3.0 system


1. Reset of the values of intermediate and final facts. Listing of the conclusions of rules decreed to the backwards inference process and sorted according to the increasing level of rule, to facilitate selection of the main hypothesis by the user. Indication of the main hypothesis by the user.

2. Organization of the program loop for iterations of the process. Acceptance of the first iteration for calculation.

3. Organization of the program loop for rules decreed to the backwards inference process and sorted according to the increasing level of rule, whose conclusions agree with the main hypothesis. An attempt at "firing" the rules from this program loop, using the same algorithm as for the forward inference process (Fig. 1, blocks 2–7). Was the "firing" of any rule from this loop successful? If yes, the main hypothesis is proved and the inference process ends. If the system was not able to "fire" the examined rule, then identification of the elementary premises inside the premise field, but with intermediate and final facts only. Computation of the logical values and CFs of these premises, similarly as for the forward inference process (Fig. 1, block 3). If a premise returns UNKNOWN, or the CF of this premise is smaller than the threshold value, then identification of all rules decreed to the backwards inference process and having the fact from the examined premise in their conclusions. The conclusions of these rules are now considered auxiliary hypotheses, and the IDs of these rules are added to the end of the work table of rules requiring examination.

yes, the examined rule has just been "fired"

no, no rule has been "fired"

yes, at least one rule has been “fired”. The main hypothesis has been proved.

A B

A B

no, no rule has been "fired". The main hypothesis has not been proved

4. Organization of the program loop for positions of the work table of rules requiring examination, created in block 3. Successive examination and attempted "firing" of particular rules from this work table, using the same algorithm as for the forward inference process (Fig. 1, blocks 2–7). Was the "firing" of the current rule from this table successful? If yes, then immediate initialization of a new iteration of the process. If not, then identification of the elementary premises inside the premise field, but with intermediate and final facts only, and possibly adding new positions to the end of the work table of rules requiring examination, as described in block 3. Examination of rules from this work table will be continued. If, however, the positions of this work table are exhausted, and no rule from this work table has been "fired", then the inference process ends with a message that the main hypothesis has not been proved.

START

STOP

STOP
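The backwards process of Fig. 2, including the work table of rules requiring examination, can be sketched as follows (illustrative structures; CFs and three-valued logic are omitted for brevity):

```python
def backward_chain(rules, facts, hypothesis):
    """rules: list of (premises, conclusion); premises is a list of
    (fact, value) pairs, conclusion a (fact, value) pair; facts is a dict."""
    # Block 3: start from the rules whose conclusion agrees with the hypothesis.
    work = [r for r in rules if r[1] == hypothesis]
    seen = set()                  # guard against re-adding the same rule
    i = 0
    while i < len(work):          # block 4: walk the work table
        premises, conclusion = work[i]
        i += 1
        if all(facts.get(f) == v for f, v in premises):
            if conclusion == hypothesis:
                return True       # the main hypothesis has been proved
            if facts.get(conclusion[0]) != conclusion[1]:
                facts[conclusion[0]] = conclusion[1]   # rule fired
                i = 0             # new iteration with the new fact value
        else:
            for f, _ in premises:
                if f not in facts:        # unknown intermediate/final fact
                    for idx, r in enumerate(rules):
                        if r[1][0] == f and idx not in seen:
                            seen.add(idx)
                            work.append(r)  # auxiliary hypothesis: end of table
    return False                  # work table exhausted, hypothesis not proved

# Toy example: the hypothesis needs an intermediate fact established by Rule 2.
rules = [
    ([("symptom", "vibration")], ("state", "bad_bearing")),   # Rule 1
    ([("gauge", "high")], ("symptom", "vibration")),          # Rule 2
]
proved = backward_chain(rules, {"gauge": "high"}, ("state", "bad_bearing"))  # True
```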



2.3. Main Hypothesis Falsification Process

If the system was not able to prove the main hypothesis (Fig. 2, block 4), this does not necessarily testify that the hypothesis is false. In this case, one should additionally try to falsify the main hypothesis. To this end, one should prove another hypothesis that falsifies the main one. For example, if the main hypothesis (taken from the knowledge base of the technical diagnostics of an MKM 33 coal mill working in a 200 MW power unit [4, 5]) is:

[Degree of failure of electric motor bearing No 1]=[Initial failure], (12)

then the hypothesis, that falsifies the main hypothesis, may be:

[Degree of failure of electric motor bearing No 1]=[Advanced failure]. (13)

It is the user's duty to indicate the hypothesis that falsifies the main hypothesis. The system assists in this process, proposing to the user, as hypotheses that falsify the main hypothesis, those conclusions of rules in which other values are assigned to the same fact as in the main hypothesis. The algorithm of falsification of the main hypothesis is identical to that of the backwards inference process; its block diagram will not be repeated. A conclusion with the same fact as in the main hypothesis need not always be a hypothesis that falsifies the main hypothesis. For example, if the main hypothesis, in an expert system of medical diagnostics, is:

[Patient’s disease]=[Meningitis], (14)

then the hypothesis, that falsifies the main hypothesis, may be:

[Patient's state of health]=[Good health]. (15)

Only the user can put such a hypothesis, and then prove it using any kind of inference process.

2.4. Mixed Inference Process

The mixed inference process consists in realizing the forward and backwards inference processes alternately. In the EXPERT 3.0 system, the mixed inference process begins with the backwards inference process. If the system is not able to prove the main hypothesis during this process (Fig. 2, block 4), it automatically switches to the forward inference process (Fig. 1, blocks 1–7), but without the initial reset of the values of intermediate and final facts, and without visualization of "fired" rules other than those with conclusions staying in agreement with the main hypothesis. If the system is again not able to prove the main hypothesis during the forward inference process, it automatically switches once more to the backwards inference process (Fig. 2, blocks 2–4), also without the initial reset of the values of intermediate and final facts, and then terminates the inference process.
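The alternation described above can be sketched as a thin wrapper; the `backward` and `forward` callables are hypothetical stand-ins for the two processes:

```python
def mixed_inference(backward, forward, facts, hypothesis):
    """backward(facts, hypothesis) -> bool tries to prove the hypothesis;
    forward(facts) extends the fact base in place, without resetting it."""
    if backward(facts, hypothesis):        # first backwards pass
        return True
    forward(facts)                         # forward pass, facts are kept
    return backward(facts, hypothesis)     # second backwards pass, then stop

# Toy demonstration: the hypothesis becomes provable only after the forward pass.
proved = mixed_inference(
    backward=lambda f, h: f.get(h[0]) == h[1],
    forward=lambda f: f.update({"b": 2}),
    facts={"a": 1},
    hypothesis=("b", 2),
)
print(proved)  # True
```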

3. The Computer Science Solutions of the EXPERT 3.0 System

The EXPERT 3.0 computer program has been written in the Delphi 4 language and compiled in a RAD (Rapid Application Development) environment. The program is very large; it consists of 115 forms/modules and 60,000 lines of source code. As mentioned above, the knowledge base of the system has been implemented in the form of a computer database, using BDE (Borland Database Engine) database technology and the Paradox local database system. A knowledge base takes the form of database tables of: expert systems (i.e. knowledge bases), facts, values of enumerated facts, rules and constants. Two other tables, of inference trajectories and of multimedia files, are auxiliary. The database meets the computer science requirement of the so-called fifth normal form. The table of expert systems (i.e. knowledge bases) is connected with the remaining tables by a one-to-many relationship. Additionally, the table of facts is connected with the table of values of enumerated facts by the same relationship, as are the tables of facts/rules and the table of multimedia files.

A record of the table of expert systems (i.e. knowledge bases) consists of the following columns (fields): ID of the expert system (primary key); Name of the expert system; Description of the expert system; Date of creation of the record.

A record of the table of facts consists of the following columns (fields): ID of the fact (primary key); ID of the expert system (foreign key); Name of the fact; Kind of the fact (with items: introduced, intermediate, final); Type of the fact (with items: enumerated, real, integer, logical); Unit of measure (real and integer only); Number of values of the fact (20 max.; enumerated only); Is the fact multi-valued? (with items: yes, no; enumerated only); Description of the fact; Instruction how to determine the value of the fact (introduced only); Value of the fact (integer, real and logical only), introduced or intermediate/final at the end of the current iteration; CF of the value of the fact as above; Value of the intermediate/final fact (integer, real and logical only) at the end of the previous iteration; CF of the value of the fact as above; Date of creation of the record.

A record of the table of values of enumerated facts consists of the following columns (fields): ID of the value (primary key); ID of the fact (foreign key); Value of the fact; Does the fact (introduced or intermediate/final at the end of the current iteration) have this value? (with items: yes, no); CF of the value of the fact as above; Did the intermediate/final fact have this value at the end of the previous iteration? (with items: yes, no); CF of the value of the fact as above; Date of creation of the record.

A record of the table of rules consists of the following columns (fields): ID of the rule (primary key); ID of the expert system (foreign key); Name of the rule; Level of the rule; Inference sessions possible (with items: forward, backwards, both); CF of the rule; Text of the premise field of the rule; Text of the conclusion of the rule; Description of the conclusion of the rule; Instructions/recommendations for the user after "firing" of the rule; Description/source of the rule. Author(s); Has the rule already been "fired"? (with items: yes, no); CF of the conclusion of the rule at the end of the current iteration; Is the rule still taken into account? (with items: yes, no); Date of creation of the record.

Fig. 3. The title form of the EXPERT 3.0 computer program

Fig. 4. The form of the table of facts of the EXPERT 3.0 computer program

Fig. 5. The form of the table of rules of the EXPERT 3.0 computer program

Fig. 6. The form of the rule editor of the EXPERT 3.0 computer program

Fig. 7. The form of visualization of conclusion of the EXPERT 3.0 program

Fig. 8. The form of the backwards inference process of the EXPERT 3.0 computer program

A record of the table of constants consists of the following columns (fields): ID of the constant (primary key); ID of the expert system (foreign key); Name of the constant; Kind of the constant (with items: simple, computational); Type of the constant (with items: real, integer, logical); Unit of measure (real and integer only); Value of the constant (simple only); ID of the argument (fact) of the constant (computational only); Value of the parameter A of the constant (constant = A*argument+B; computational only); Value of the parameter B of the constant (as above); Description of the constant; Instruction how to determine the value of the constant (simple only); Date of creation of the record.
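The two kinds of constants can be illustrated with a small helper; the record layout below mirrors the columns just listed (kind, value, parameters A and B), with field names chosen here for illustration.

```python
# Sketch: evaluating a constant record. A 'simple' constant stores its
# value directly; a 'computational' one is A * argument_fact + B.

def constant_value(record, fact_values):
    if record['kind'] == 'simple':
        return record['value']
    a, b = record['A'], record['B']
    return a * fact_values[record['argument_fact_id']] + b

rec = {'kind': 'computational', 'argument_fact_id': 7, 'A': 2.0, 'B': 1.5}
print(constant_value(rec, {7: 10.0}))   # 21.5
```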

A record of the table of inference trajectories consists of the following columns (fields): ID of the trajectory step (primary key); ID of the expert system (foreign key); ID of the rule examined in this trajectory step; Current level of rules in this trajectory step; Current number of iteration of the inference process; Current direction of the inference process (with items: forward, backwards); Has the examined rule been "fired" in this trajectory step? (with items: yes, no); CF of the conclusion of the "fired" rule in this trajectory step; The examined rule has not been "fired" in this trajectory step due to: (with 9 different items); Date of creation of the record.

A record of the table of multimedia files consists of the following columns (fields): ID of the file (primary key); Name of the file (filename and extension); Kind of the file (with items: graphic/photo, film); Height/width ratio of the picture/screen of the file; What does the file illustrate? (with items: fact, rule); ID of the illustrated fact/rule (foreign key); Description/caption of the graphic/photo/film; Date of creation of the record.
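The tables and one-to-many relationships described above can be approximated by the following relational sketch (SQLite via Python); the table and column names are illustrative abbreviations of the fields listed in the text, not the actual Paradox definitions.

```python
import sqlite3

# Sketch of the knowledge-base schema: one expert system owns many
# facts, rules and constants; an enumerated fact owns many values.
conn = sqlite3.connect(':memory:')
conn.executescript("""
CREATE TABLE expert_system (
    id INTEGER PRIMARY KEY, name TEXT, description TEXT, created TEXT);
CREATE TABLE fact (
    id INTEGER PRIMARY KEY,
    expert_system_id INTEGER REFERENCES expert_system(id),
    name TEXT, kind TEXT, type TEXT);
CREATE TABLE fact_value (            -- enumerated facts only
    id INTEGER PRIMARY KEY,
    fact_id INTEGER REFERENCES fact(id),
    value TEXT, is_current INTEGER, cf REAL);
CREATE TABLE rule (
    id INTEGER PRIMARY KEY,
    expert_system_id INTEGER REFERENCES expert_system(id),
    name TEXT, level INTEGER, cf REAL, premise TEXT, conclusion TEXT);
CREATE TABLE constant (
    id INTEGER PRIMARY KEY,
    expert_system_id INTEGER REFERENCES expert_system(id),
    name TEXT, kind TEXT, value REAL, a REAL, b REAL);
""")
conn.execute("INSERT INTO expert_system VALUES (1, 'Coal mill diagnostics',"
             " 'MKM 33', '2013-01-01')")
print(conn.execute("SELECT name FROM expert_system").fetchone()[0])
```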

In accordance with the notion of the skeleton expert system, all tables are fully programmed, but in the distributed version of the system they are empty. It is the user's duty to fill these tables with suitable knowledge from the needed domain.

The structure of the database of the EXPERT 3.0 system is presented below in the title form of the computer program (Fig. 3).

Parameters of facts are presented in the form of the table of facts (Fig. 4). If a predefined graphic image (in the table of multimedia files) is assigned to a value of a given enumerated fact, it is possible to create a rule with graphic premises. Consequently, the EXPERT 3.0 system may perform functions similar to those of the SCANKEE skeleton expert system [6]. In the EXPERT 3.0 system, multimedia files (graphics, photos and films) may also be used to instruct the user regarding the assignment of values to introduced facts (for example, by measurement) and, particularly, regarding the interpretation of conclusions and/or the undertaking of necessary activities (for example repairs or overhauls) resulting from the conclusions of the system (Fig. 7).

Fig. 9. The form of the main hypothesis falsification of the EXPERT 3.0 computer program

Parameters of rules are presented in the form of the table of rules (Fig. 5) and in the form of the rule editor (Fig. 6). The following rule, taken from the knowledge base of technical diagnostics of the MKM 33 coal mill working in a 200 MW power unit [4, 5], and shown in these forms as an active record, may serve as an example of the syntax of a simple rule in the EXPERT 3.0 system (after translation into English):

IF [Is maximum envelope of vibration acceleration of electric motor bearing No 1 of mill less than level D?]=[true] AND [Is sum of envelope of vibration acceleration of electric motor bearing No 1 of mill less than level D?]=[true] AND [Root-mean-square value of sum of vibration speed of electric motor bearing No 1 of mill]>[1.1] AND [Root-mean-square value of sum of vibration speed of electric motor bearing No 1 of mill]<=[2.8] AND [Root-mean-square value of sum of vibration acceleration of electric motor bearing No 1 of mill]>[3] AND [Root-mean-square value of sum of vibration acceleration of electric motor bearing No 1 of mill]<=[7.5] THEN [Degree of failure of electric motor bearing No 1 of mill]=[Initial failure] (16)

The premise field of this rule consists of 6 elementary premises joined by conjunction. In the syntax of each elementary premise, the name of a fact, a comparison operator and a reference value are present. Unlike other skeleton expert systems, in which only illegible identifiers of facts occur in the rules, here the full name of the fact, with spaces, Polish diacritical letters and other characters, all enclosed in square brackets, is introduced into the rule. This makes the rule very readable. The same applies to the values of facts, especially enumerated facts. It may seem that creating such a rule, given the necessity of entering full fact names, is very arduous. On the contrary, the process is very simple, thanks to the rule editor built into the system. The name of a fact is introduced into the system only once (in the table of facts). The rule editor then copies the names of facts, clicked by the user from a list, into the rules being created.
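Because every fact name and reference value is enclosed in square brackets, an elementary premise can be split mechanically into its three parts; the small parser below is an illustrative sketch, not the system's actual rule-editor logic.

```python
import re

# Sketch: split one elementary premise of the form
#   [fact name]OP[reference value]
# where OP is one of =, >, <, >=, <=. Longer operators are tried first
# so that '<=' is not misread as '<'.
PREMISE = re.compile(r'\[([^\]]+)\](<=|>=|=|<|>)\[([^\]]+)\]')

def parse_premise(text):
    m = PREMISE.fullmatch(text.strip())
    if not m:
        raise ValueError('not an elementary premise: %r' % text)
    return m.group(1), m.group(2), m.group(3)

p = parse_premise('[Root-mean-square value of sum of vibration speed '
                  'of electric motor bearing No 1 of mill]<=[2.8]')
print(p[1], p[2])   # <= 2.8
```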

It is also worth noticing that the logical facts in the first two elementary premises of the exemplary rule above are so-called coverings, i.e. certain aggregated parameters computed by external computer programs, which usually mediate in the preparation and transmission of values of introduced facts into the expert system. Such an organization considerably hastens and facilitates the inference process.

During the inference process, after each "firing" of a rule, the form of visualization of the rule conclusion is shown (Fig. 7). Before the backwards or mixed inference process, the user selects the main hypothesis in the form shown in Fig. 8. Before the main hypothesis falsification process, the user selects the hypothesis (or hypotheses) that falsify the main hypothesis in the form shown in Fig. 9.

4. The Applications of the EXPERT 3.0 Program

As mentioned above, the program is at present principally utilized as a didactic tool in the AI domain. Using this program, students have already created hundreds of knowledge bases from different domains.

As an industrial application of this program, the knowledge base of technical diagnostics of the MKM 33 coal mill, working in a 200 MW power unit, has been transferred from the old SOCRATES skeleton expert system [4, 5] and loaded into the EXPERT 3.0 system.

Technical diagnostics is a branch of science which evolved from the theory of exploitation, cybernetics and reliability. Among the many methods of technical diagnostics, the methods of vibration/acoustic diagnostics have found special application in power plants. These methods are applicable anywhere in a technological process where vibrations and noise occur and where device failures may be the cause of these vibrations and noise.

The systems of vibration/acoustic diagnostics are particularly applicable to rotating machines and devices, such as turbo sets, feed water pumps, cooling water pumps, condensation pumps, coal mills, flue gas fans, air fans, mill fans and others. The computer in such systems processes signals of vibration displacement collected from many sensors, differentiates these signals twice, expands them into Fourier series, and calculates amplitudes and RMS (root-mean-square) values of individual harmonics of vibration speed and acceleration (see the exemplary rule (16) above).
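The processing chain just described (displacement signal → differentiation → Fourier analysis → per-harmonic RMS values) can be sketched as follows; the sampling rate, the 50 Hz harmonic and the 1 mm amplitude are invented for illustration, and differentiation is performed in the frequency domain (multiplication by the angular frequency ω).

```python
import cmath, math

# Sketch: estimate the amplitude of one harmonic of vibration speed and
# acceleration from displacement samples, by evaluating a single DFT bin
# and multiplying by omega (differentiation in the frequency domain).
fs = 1000.0                                   # sampling rate, Hz
n = 1000
f0 = 50.0                                     # harmonic of interest, Hz
x = [1e-3 * math.sin(2 * math.pi * f0 * k / fs) for k in range(n)]

# One-sided DFT bin at f0 -> displacement amplitude of that harmonic.
bin_sum = sum(x[k] * cmath.exp(-2j * math.pi * f0 * k / fs) for k in range(n))
disp_amp = 2 * abs(bin_sum) / n               # ~1e-3 m

w = 2 * math.pi * f0
speed_amp = w * disp_amp                      # d/dt -> multiply by omega
accel_amp = w * speed_amp                     # second derivative -> omega^2
rms_speed = speed_amp / math.sqrt(2)          # RMS of a sinusoidal component
print(round(rms_speed, 4))                    # ~0.2221 m/s
```

Values such as `rms_speed` are exactly the kind of parameter compared against thresholds in rule (16).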

On the basis of these parameters, specialists in vibration/acoustic diagnostics decide about the technical state of a device, and especially about the presence and degree of progress of typical device failures. The purposes of this diagnostic analysis are: a) to lengthen the durability and life of the material in the machines; b) to determine principles for rational exploitation of the machines; c) to determine the scope of necessary replacement of the machines and devices; d) to ensure that damage does not reach a point which threatens power unit breakdowns and the destruction of the machine or device; e) to determine the scope of maintenance work and its timing; and f) to optimally extend overhaul life. The range of knowledge and experience in vibration/acoustic diagnostics is already enormous. In order to make this knowledge accessible and to utilize it, one must use methods of knowledge engineering. Expert systems are especially applicable here.

The knowledge base of technical diagnostics of the MKM 33 coal mill consists of more than 500 rules and 300 facts (among them 200 introduced facts, principally vibration/acoustic parameters). The system diagnoses more than 50 elementary failures of the coal mill. The system was utilized in one of the large Polish power plants [4, 5].


5. Conclusions and Recommendations

Expert systems are important tools in many branches of the world and national economies and industry, especially as systems of technical, medical and financial diagnostics.

It is recommended to utilize modern expert system computer programs, written and developed in contemporary computer languages and RAD environments, as user-friendly programs. Such programs should utilize a computer database, as a modern computer science tool, to store the parameters of the knowledge base.

An expert system should include an uncertainty management module.

An expert system should make it possible to perform the forward, backwards and mixed inference processes, as well as the main hypothesis falsification process.

AUTHOR

Władysław Brzozowski – Retired associate professor (in the domain of electric power engineering and also of computer science – Artificial Intelligence) of Częstochowa University of Technology, Institute of Electric Power Engineering in Częstochowa (Poland); retired associate professor (in the domain of computer science – computer languages and programming) of The Gen. Jerzy Ziętek Silesian (Higher) School of Management in Katowice (Poland). E-mail address for correspondence: [email protected].

REFERENCES

[1] Gondran M., Introduction aux systèmes experts, Eyrolles, Paris 1984.

[2] Cholewa W., Pedrycz W., Systemy doradcze, Skrypt nr. 1447, Politechnika Śląska, Gliwice 1987. (in Polish)

[3] Iwański C., Instrukcja obsługi systemu szkieletowego SOCRATES, IBS PAN, Warszawa, 1990. (in Polish)

[4] Brzozowski W., Dziemidowicz J., „Ekspercki system diagnostyki i remontów młynów węglowych”. In: Materiały VI Międzynarodowej Konferencji Naukowej „Aktualne Problemy w Elektroenergetyce”, KE PAN O/Katowice – IEiSU Politechnika Śląska, Gliwice – Kozubnik, 1993. (in Polish)

[5] Brzozowski W., „Struktury i metody pozyskiwania wiedzy w systemie diagnostyki technicznej na przykładzie systemu eksperckiego EKSPEM”. In: ed. Kulikowski R., Bogdan L., Wspomaganie decyzji. Systemy Eksperckie, Materiały Konferencji, IBS PAN, Warszawa, 1995. (in Polish)

[6] Guzowska-Świder B., „Zastosowanie systemu ekspertowego SCANKEE do symulowania procesu analizy chemicznej”. In: ed. Kulikowski R., Bogdan L., Wspomaganie decyzji. Systemy Eksperckie, Materiały Konferencji, IBS PAN, Warszawa, 1995. (in Polish)

[7] Mulawka J.J., Systemy ekspertowe, WNT, Warszawa, 1996. (in Polish)

[8] Niederliński A., Regułowe systemy ekspertowe, Wydawnictwo Pracowni Komputerowej Jacka Skalmierskiego, Gliwice, 2000. (in Polish)

[9] Brzozowski W., “Expert Systems of the Tech-nical Diagnostics in Power Plants”, Przegląd Elektrotechniczny, no. 3, 2009, pp. 124–129.

[10] Barzilay R. et al., A new approach to expert system explanations, 2012 (http://www.cogentex.com/papers/explanation-iwnlg98.pdf).


Map Construction and Localization Using Lego Mindstorms NXT

Jasmeet Singh, Punam Bedi

Submitted: 7th August 2012; accepted: 17th May 2013


Abstract: Maps are very useful for understanding unknown places before visiting them, as maps represent spatial relationships between various objects in a region. Using robots for map construction is an important field these days, as robots can reach places which may be inaccessible to human beings. This paper presents a method to use the data obtained from a single ultrasonic sensor mounted on a robot to construct a map and localize the robot within that map. A map of the previously unknown environment is created with the help of a mobile robot, built using Lego Mindstorms NXT assembled in a modified TriBot configuration. The robot is equipped with an ultrasonic sensor and is controlled from a computer system running a MATLAB program, which communicates with the NXT over a USB or Bluetooth connection and performs complex calculations that are not possible for the NXT itself. After the map construction, the robot finds its position in the map by using a particle filter. Implementation has been done in the MATLAB programming environment using the RWTH – Mindstorms NXT Toolbox and has been successfully tested for map construction of a room and localization within that room with the use of a TriBot.

Keywords: Lego Mindstorms NXT, TriBot, ultrasonic sensor, map construction, localization, particle filter, MATLAB, mobile robot, RWTH – Mindstorms NXT Toolbox

1. Introduction

Maps are used in a variety of applications for our day-to-day needs. These maps can be considered as macroscopes, used primarily to represent spatial relationships between various objects in a region on a smaller scale. Constructing a geometrically consistent map is a tedious process that requires determining the spatial relationships between various objects, which in turn requires a lot of measurements. The presented work describes a method to construct a map using a Lego Mindstorms NXT based TriBot (referred to from here on as simply NXT or TriBot), which has been successfully tested for creating an indoor map of a room.

This paper is divided into two main sections: Map Construction and Localization, and the Kidnapped Robot Problem. The map construction and localization section describes how the robot takes readings of its surroundings, moves in the environment, and how the constructed global map is displayed based on the readings obtained by the ultrasonic sensor of the TriBot. The latter section describes how the robot's position is found within the constructed map, after it has been placed at a random position within the mapped environment, using the readings obtained from its sensors and its knowledge of the map.

Previous work in the localization of robots has been carried out with the use of more than one ultrasonic sensor at once [1–3]. This requires the robot to know which sensor returned a reading, and hence the robot needs to distinguish between the several ultrasonic sensors mounted on it. With the use of multiple sensors, the cost of the robot increases as well. The work carried out in this paper requires the use of only a single ultrasonic sensor for localization, placed at the origin of the robot reference frame, thereby eliminating the additional computations for finding the relative positions of all the ultrasonic sensors with respect to the reference frame, as in [1, 3]. When particle filters have been used for solving global localization [1, 4], the map given as an input to the particle filter algorithm was not built by the robot itself [1] but was constructed by the user via other techniques. Here, however, the TriBot itself constructs the global map that is later used in providing a solution to the kidnapped robot problem [4] described in the localization section.
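As an illustration of the particle-filter idea behind such global localization, here is a deliberately simplified one-dimensional sketch; the corridor map, noise levels and particle count are invented, and the real system of course works in two dimensions with the constructed map.

```python
import random, math

# Sketch: 1D particle filter. The "map" is a corridor of length 10 m with
# a wall at x = 10; the sensor measures distance to that wall. The robot's
# true pose is unknown (kidnapped); particles converge on it.
random.seed(0)
L, sigma = 10.0, 0.2                      # corridor length, sensor noise (m)
true_x = 3.0

def measure(x):                           # distance to far wall, with noise
    return L - x + random.gauss(0, sigma)

particles = [random.uniform(0, L) for _ in range(500)]
for _ in range(20):                       # repeated measure/resample steps
    z = measure(true_x)
    weights = [math.exp(-((L - p - z) ** 2) / (2 * sigma ** 2))
               for p in particles]
    # Importance resampling: draw particles in proportion to the weights.
    particles = random.choices(particles, weights=weights, k=len(particles))
    # Motion update: robot and particles advance 0.1 m with small noise.
    true_x += 0.1
    particles = [p + 0.1 + random.gauss(0, 0.05) for p in particles]

estimate = sum(particles) / len(particles)
print(round(estimate, 2), "vs true", round(true_x, 2))
```

The motion noise injected after resampling keeps the particle set diverse, which is what lets the filter recover even from a deliberately wrong initial belief.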

Mapping with ultrasonic sensors has previously relied on the use of multiple sensors in a ring formation [5]. With the use of a single ultrasonic sensor it becomes possible to successfully map the environment surrounding the NXT while keeping the computational complexity to a minimum. It also helps in specifying the number of readings that must be received from the surroundings for each position of the NXT, without having to change the number of ultrasonic sensors in the robot's hardware.
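A single rotating sensor yields, at each robot pose, a set of (angle, distance) readings; converting them into global map coordinates is a small transformation, sketched below with an illustrative pose convention (this is not the paper's MATLAB code).

```python
import math

# Sketch: turn ultrasonic readings, taken by rotating a single sensor in
# angular steps at one robot pose (x, y, theta), into global Cartesian
# points for the map.

def readings_to_points(pose, readings):
    """pose: (x, y, theta) in the map frame, theta in radians.
    readings: list of (relative_angle, distance) pairs; the sensor sits
    at the origin of the robot frame, as in the setup described above."""
    x, y, theta = pose
    points = []
    for rel_angle, dist in readings:
        a = theta + rel_angle
        points.append((x + dist * math.cos(a), y + dist * math.sin(a)))
    return points

# Robot at (1, 2) facing +x; one reading 0.5 m straight ahead,
# one reading 1.0 m to the left (90 degrees).
pts = readings_to_points((1.0, 2.0, 0.0),
                         [(0.0, 0.5), (math.pi / 2, 1.0)])
print([(round(px, 2), round(py, 2)) for px, py in pts])
# [(1.5, 2.0), (1.0, 3.0)]
```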

For global map construction and localization, the concept of occupancy grids has also been previously employed in [6, 7], which, though effective, depends on the size of the grid cells. A robot, when moved to a frontier, is able to see the unexplored area of the environment. By constantly moving to successive frontiers, the robot is able to increase its knowledge of the surrounding environment. Frontier-based exploration has been coupled with occupancy grids for map construction and localization in [6], but the implementation of such a system used multiple laser range finders, sonar sensors and infrared sensors to make the resulting map as accurate as possible. The amount of computation and complexity increases with multiple sensors, but with the use of a single ultrasonic sensor the amount of computation is low. In this work, a frontier-based exploration strategy has been employed without using occupancy grids, and only the readings obtained at the current position of the robot are used for deciding the next frontier for the robot.

In other work on map construction and localization, Simultaneous Localization and Mapping (SLAM) has been used to localize the robot as it moves within the environment while mapping is performed [3, 4, 8]. In the current system, each robot position is relative to the origin of the map and the robot has no knowledge of a prior map. It is also possible to construct the map with a swarm of robots in the same environment [8] to speed up the process, but the implementation here is limited to a single robot, both to show that a single robot can perform map creation efficiently and because it is easier for the computer to keep track of a single robot.

2. Background

2.1. Environment

The environment considered for the implementation is partially observable, with a single robot working in a continuous state space. The actual environment is an empty room formed by temporary partitions, available exclusively to the robot and free of human interference for the purposes of the experiments.

2.2. Assumptions Taken

— The walls bounding the room and the objects within the room are immovable.

— Friction between the floor of the room and the wheels of the robot is just sufficient that, during the execution of a turn in place, the stationary wheel of the robot does not slip.

— For map construction, the TriBot's first location is assumed to be (0, 0) with a 0 degree orientation.

2.3. Software

MATLAB is used for moving the robot, taking readings of the local surrounding environment, performing the calculations for map creation and localization, and displaying the created global map.

The RWTH – Mindstorms NXT Toolbox [9], version 4.04 for MATLAB, establishes a connection from the program running on the computer to the TriBot. With this toolbox it is possible to control all the motors and sensors attached to the TriBot. Hence the MATLAB program can move the robot by controlling its wheeled motors, use the ultrasonic sensor to detect distances to objects, and rotate the motor on which the ultrasonic sensor is mounted.

For controlling the wheeled motors and determining the robot's movements, it is important to find the number of degrees the motors must rotate to travel a given distance. The diameter of the wheels used is 5.5 cm, so for one 360 degree revolution of the wheels the robot moves forward by 5.5π ≈ 17.28 centimeters. The degrees of revolution for the wheeled motors are calculated from the distance in centimeters that the robot is required to move.
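The conversions above can be sketched as follows. This is a language-neutral illustration in Python (the actual implementation uses MATLAB with the RWTH toolbox); the figures of roughly 21 encoder counts per centimeter and 4 counts per degree of turning are taken from the motor description in Section 2.4.

```python
import math

WHEEL_DIAMETER_CM = 5.5   # wheel diameter given in the text
COUNTS_PER_TURN_DEG = 4   # tacho counts on one wheel per degree of robot turn

def distance_to_degrees(distance_cm):
    """Degrees (= encoder counts) the wheel motors must rotate
    to drive the robot forward by distance_cm."""
    circumference = math.pi * WHEEL_DIAMETER_CM   # ~17.28 cm per revolution
    return 360.0 * distance_cm / circumference

def tacho_limit_forward(distance_cm):
    """Integer tacho limit for straight-line travel (~21 counts per cm)."""
    return round(distance_to_degrees(distance_cm))

def tacho_limit_turn(angle_deg):
    """Integer tacho limit on the moving wheel to pivot by angle_deg,
    while the other wheel is held stationary."""
    return round(COUNTS_PER_TURN_DEG * angle_deg)
```

For one centimeter, `tacho_limit_forward(1)` yields 21, matching the approximate figure quoted in Section 2.4.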

To use the ultrasonic sensor it is necessary to first establish a serial connection to the NXT brick and to the port on which the ultrasonic sensor is attached. Through this port, the distance readings recorded by the sensor are returned to the MATLAB program via the USB or Bluetooth connection.

2.4. Sensor and Motors

The ultrasonic sensor used with the TriBot measures the time of flight of a burst of high-frequency sound. The value returned by the sensor is a single number representing the straight-line distance of an object from the sensor. These values are in centimeters and are read by the MATLAB program. The sensor detects distances ranging from 4 centimeters to 2.3 meters with an average precision of 3 centimeters, and returns better readings when the detected object has a hard surface. If a wave hits a surface at a wide angle from the surface normal [5] it may not return to the sensor, in which case a value of 255 is received, indicating an error. The wave may also bounce off a separate surface and create a ghost image.
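A minimal sketch of how such readings can be validated (Python for illustration; the helper name is ours, not part of the toolbox):

```python
SENSOR_MIN_CM = 4      # shortest detectable distance
SENSOR_MAX_CM = 230    # 2.3 m maximum range
ERROR_VALUE = 255      # returned when no echo comes back to the sensor

def is_valid_reading(distance_cm):
    """Reject the 255 error code and values outside the sensor's range."""
    if distance_cm == ERROR_VALUE:
        return False
    return SENSOR_MIN_CM <= distance_cm <= SENSOR_MAX_CM
```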

The interactive servo motors of the TriBot are equipped with an integrated encoder which records a count of 360 for a single revolution, i.e., when the motor rotates by 360 degrees a count of 360 is recorded. The encoder has an accuracy of 1 degree, and the motor returns the count to the program via the serial connection. The motors have a default maximum speed of 1000 encoder counts per second, which means the robot can travel in a straight line at approximately 48 centimeters per second. For the experiments, the wheeled motors run at a fixed speed of 800 encoder counts per second (approximately 38.4 centimeters per second) and maintain a constant torque during motion. The program can also set the tacho limit of a motor, which specifies the integer number of encoder counts (or degrees) the motor executes before coming to an abrupt stop. For the NXT to move one centimeter forward, both motors must move in the same direction by a tacho limit of approximately 21 encoder counts. The two wheels are 11 centimeters apart, so to turn the robot by 1 degree while moving one wheel and keeping the other stationary, the moving motor must be given a tacho limit of 4. The motor on which the ultrasonic sensor is mounted rotates at approximately 400 counts per second, so a 360 degree rotation of the sensor with distance readings every 6 degrees takes 30 seconds on average.

2.5. Robot Used

The robot used for the experiments is based on the Lego Mindstorms NXT TriBot, with a few modifications.

The TriBot is equipped with one ultrasonic sensor, mounted on an interactive servo motor. A ring of 60 sensor readings can be created in one complete circular turn by rotating the sensor anticlockwise through 360 degrees and taking a reading at every 6 degree interval. The ultrasonic sensor is placed at the origin of the robot reference frame [2].

Fig. 1. Top view of the Lego Mindstorms NXT TriBot based robot with an interactive servo motor and the ultrasonic sensor attached on top

Fig. 2. Front view of the Lego Mindstorms NXT TriBot based robot with the ultrasonic sensor pointing forward

The central component of the TriBot is the NXT brick, which is essential for establishing a connection with the computer and for controlling the sensors and motors. Figure 1 shows the TriBot used for the experiments.

Figure 2 shows the front view of the TriBot. The wheeled motors on either side of the robot are synchronized, so that if one wheel moves forward the other also moves forward by the same degree and at the same speed. Both wheels thus move in unison, and it is not necessary to control the power and tacho limit of each motor separately. This is important when the robot moves forward or backward, to ensure that it follows a straight line rather than a curve. A single wheeled motor can also be controlled separately when the robot executes a turn.

Figure 3 shows a schematic diagram of the TriBot. The ultrasonic sensor is represented by a yellow triangle, and the orange circle in the center of the robot represents the motor on which the sensor is mounted. The intersecting perpendicular lines within the robot represent the robot reference frame; its origin is at the point where the lines intersect. The two wheels on either side of the robot and a swiveling wheel are shown as round-edged rectangles.

Fig. 3. Schematic representation of the Lego Mindstorms NXT TriBot based robot depicting the robot reference frame

3. Map Construction and Localization

The map construction process involves recording and displaying the robot's surrounding environment. It is implemented as a continuous cycle of two steps, sensing and motion, as shown in Figure 4.

Fig. 4. Map construction process used by the TriBot in implementation
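The sensing–motion cycle of Figure 4 can be sketched as below. All four callables are hypothetical placeholders for the operations described in Sections 3.1 and 3.2; the sketch (in Python, for illustration) shows only the control flow.

```python
def build_map(sense, to_global, choose_frontier, move_to, cycles):
    """Alternate sensing and motion for a fixed number of cycles.

    sense(pose)          -> local readings at the current pose
    to_global(pose, r)   -> readings shifted into map coordinates
    choose_frontier(r)   -> next frontier picked from the readings
    move_to(pose, f)     -> new pose after driving to the frontier
    """
    global_map = []
    pose = (0.0, 0.0, 0.0)   # x, y, heading: the map-origin assumption
    for _ in range(cycles):
        readings = sense(pose)
        global_map.extend(to_global(pose, readings))
        pose = move_to(pose, choose_frontier(readings))
    return global_map
```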

The robot navigates around an area and records the features of that space. It initially starts at the origin position, around which the global map is constructed. In this system, the origin of the robot reference frame is initially assumed to coincide with the origin of the map, and the Cartesian coordinates of the robot in the map are (0, 0) with an orientation of 0 degrees with respect to the X-axis. A position of the robot in the map is a single set of values, its X and Y coordinates and its orientation, with respect to the origin of the map. These positions are recorded as distinct rows of a matrix in the MATLAB program, with the origin of the map saved as the first position of the robot. The ultrasonic sensor mounted on the robot is initially assumed to point in the 0 degree direction with respect to the X-axis of the map.

3.1. Sensing

In this step the robot senses its current local surroundings using the ultrasonic sensor. The TriBot takes a set of 120 ultrasonic readings of the area around its current position by rotating the motor in 6 degree steps in the counter-clockwise direction, for a total of 720 degrees instead of a single 360 degree revolution; measuring every bearing twice decreases the probability that the recorded readings represent a ghost image. The values thus obtained are relative to the origin of the robot reference frame and describe the robot's immediate neighborhood.

The angles at which these readings are recorded are saved along with the distance values, which give the recorded straight-line distance between the robot and an object at that angle. The motor is rotated until the sensor points at an angle of 714 (equivalently 354) degrees with respect to the positive X-axis. These 120 readings, saved in a two-column matrix, represent the local map of the robot at that position; the columns hold the radius and angle of the polar coordinates of the objects relative to the robot's current position.
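The set of bearings covered by one 720 degree scan can be written out as follows (a Python sketch of the same arithmetic):

```python
def double_scan_angles(step_deg=6, total_deg=720):
    """Bearings (mod 360) at which the 120 readings are taken;
    each direction from 0 to 354 degrees is measured twice."""
    return [a % 360 for a in range(0, total_deg, step_deg)]

angles = double_scan_angles()   # 120 values: 0, 6, ..., 354, 0, 6, ..., 354
```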

After a set of readings is obtained, every distance reading of 255 (the sensor's out-of-range value) is deleted together with its corresponding angle, so that these readings are not considered for map construction. The sensor is then rotated back to the 0-degree position it held before the readings were taken. To display the map, the Polar coordinates of the objects, taken with respect to the robot reference frame, are converted into Cartesian coordinates. The map is initially empty, and the set of readings taken at the origin of the global map is then added to it.
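The filtering and conversion just described might look like this (a Python sketch, not the authors' MATLAB code):

```python
import math

# Drop out-of-range readings (distance 255), then convert the surviving
# (distance, angle) pairs -- polar coordinates in the robot frame -- to
# Cartesian coordinates for plotting.
def polar_to_points(readings):
    """readings: list of (distance_cm, angle_deg) pairs from one scan."""
    kept = [(d, a) for d, a in readings if d != 255]
    return [(d * math.cos(math.radians(a)), d * math.sin(math.radians(a)))
            for d, a in kept]
```

For example, a scan `[(100, 0), (255, 6), (50, 90)]` keeps two points: one 100 cm along the X-axis and one 50 cm along the Y-axis.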

For every subsequent set of readings, which represents the TriBot's local map at some known position relative to the origin of the map, the calculated Cartesian coordinates of the readings are shifted by the X and Y values of the TriBot's current position before being added to the existing map. The robot's current position with respect to the global origin is also plotted onto the map. This process continues for as long as the robot keeps exploring the room, and the map existing at that point is treated as the final map of the room.
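Merging a local scan into the global map then reduces to a translation by the robot's current map coordinates (again a Python sketch under the same assumptions):

```python
# Shift each local Cartesian point by the robot's (x, y) position in the
# map frame and append it to the global map.
def add_to_global_map(global_map, local_points, robot_x, robot_y):
    for px, py in local_points:
        global_map.append((px + robot_x, py + robot_y))
    return global_map
```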

The robot stops at each new position and records the local map readings of that position, a process which takes 60 seconds to obtain 120 readings. The error in the readings obtained depends on the ultrasonic sensor, so an average error of 3 centimeters is possible in the readings, which is a fairly acceptable value for this kind of sensor.

3.2. Motion

After sensing the environment surrounding the robot, the distance in centimeters and the direction of the next frontier from the robot's current position are computed, and the robot then moves toward this new frontier location.

The robot moves to a new frontier location so that it can detect objects in the previously unexplored area of the room. To ensure that the TriBot can explore the complete room efficiently, the 120 distance readings obtained at the current position of the robot are sorted, and one of the four farthest recorded distances is selected at random. Random selection is used because the recorded distances may be erroneous readings caused by a ghost image.

The TriBot turns by the angle of the direction of the selected farthest point with respect to the robot reference frame and then moves half the distance to that point. The distance is halved to counteract the possibility that the selected point represents a ghost image. Every new position of the robot depends only on the relative displacement from the previous position, which in turn depends on the readings taken at that position. The robot's relative displacement from the previous position to the new position is stored in a matrix. The relative orientation from the previous position is the angle the robot turned, found during the new-frontier calculation. The X and Y coordinates of the new position reached by this displacement are found by converting the distance moved and the angle rotated with respect to the previous position into Cartesian values. The current position of the robot relative to the origin of the map is calculated by taking the arithmetic sum of all previous relative displacements and adding the displacement from the last known position to the current position. Hence the robot's position is localized by summing all the displacements.
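The frontier selection and dead-reckoning update above can be sketched as follows (Python; the helper names are ours, not the paper's):

```python
import math
import random

def choose_frontier(readings):
    """Pick one of the four farthest (distance_cm, angle_deg) readings
    at random, to guard against a single erroneous ghost reading."""
    farthest = sorted(readings, key=lambda r: r[0])[-4:]
    return random.choice(farthest)

def next_position(x, y, distance_cm, angle_deg):
    """Turn toward the frontier and move half the distance to it;
    return the new (x, y) obtained by accumulating the displacement."""
    d = distance_cm / 2.0
    return (x + d * math.cos(math.radians(angle_deg)),
            y + d * math.sin(math.radians(angle_deg)))
```

Summing these per-step displacements from the origin onward reproduces the robot's position in the map frame.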

The sensing and motion steps are repeated continuously to construct the final map through the robot's exploration of the environment and to find the position of the robot after every relative displacement. The readings taken at each position contribute to the calculation of the robot's next frontier; the robot keeps no knowledge of readings taken earlier. The robot also maintains orientation information relative to the X-axis of the global map, though each move may add approximately 0.25 degrees of error relative to the previous position.

All the Cartesian coordinates of the observed objects and the robot positions are stored in separate matrices, which are later saved as files so that the map can be recreated after the robot has finished collecting readings from the environment. A single reading, consisting of the X and Y coordinate values of a point in the room, takes roughly 16 bytes of file space. So, for 120 readings taken at 100 positions, i.e. a total of 12000 records, the file size will be approximately 188 kBytes.

4. Kidnapped Robot Problem

The algorithm implemented and discussed here is used by the TriBot to find its approximate location in an existing map by taking readings of its surroundings. During map construction, the robot finds its position through relative displacements from the origin of the map. When the robot knows only the coordinates of the objects in the map and not its own position relative to the origin, it is localized within the environment by taking measurements of its immediate surroundings and by moving within the area covered by the global map. Figure 5 depicts these two steps.

Fig. 5. Localization process used by the TriBot

The positions of the particles, which represent guesses of the robot's possible position within the map, are taken as the particles' initial belief. The particles then update their belief of the robot's position using the measurements taken by the robot's ultrasonic sensor, which leads to the survival of only the few particles with probable initial beliefs. This selection of particles with probable beliefs is repeated continuously, by moving the robot and the particles by some distance and taking subsequent measurements, until all the particles are clustered around a single position within the map, which can then safely be taken as the robot's approximate location.

4.1. Particle Filters

Particle filters are used in this work as they are easier to program for continuous-valued problems than Kalman Filters [10, 11]. Particle filters also work better than Histogram Filters [11] in higher-dimensional spaces. Table 1 compares the Histogram, Kalman, and Particle filters on the basis of two characteristics.

Table 1. Comparisons between Histogram, Kalman and Particle Filters

Filter Name   State-Space   Belief
Histogram     Discrete      Multi-Modal
Kalman        Continuous    Uni-Modal
Particle      Continuous    Multi-Modal

4.2. Localization using Particle Filters

After satisfactory map construction of the room, the robot is kidnapped, i.e., removed from the environment and then placed randomly somewhere inside the area represented by the global map. The current position of the robot within the map is unknown; it is determined by finding distances to nearby objects with the ultrasonic sensor and computing a good posterior distribution of the robot's position based on those readings.

Each particle represents a possible position and orientation of the robot in the map. A set of hundreds of such particles (N of them) together forms an approximate representation of the posterior distribution of the robot's pose [11]. In the beginning, all N particles are scattered uniformly, and the filter lets them survive in proportion to how consistent they are with the sensor readings: particles that are more consistent with the sensor measurements are considered fitter and are more likely to survive. Measurements of the robot's surroundings are recorded and applied to all the particles individually. As a result, the closer a particle is to the correct position, the more likely it is that the measurements could have been made from that particle's pose, and the higher its likelihood of survival. The particles that are highly consistent with the measurements form a cluster around a single position in the map, which approximately represents the position of the robot as it localizes itself within the map.
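One common concrete form of this update, in the style of [11] though the paper does not spell out its exact formula, scores each particle with a Gaussian likelihood of the measurements and then resamples in proportion to the weights (a hedged Python sketch):

```python
import math
import random

def gaussian(mu, sigma, x):
    """Probability density of x under a Gaussian centred at mu."""
    return math.exp(-((mu - x) ** 2) / (2 * sigma ** 2)) / math.sqrt(2 * math.pi * sigma ** 2)

def importance_weight(predicted, measured, sense_noise):
    """Weight a particle by how well the distances it would predict from
    its pose match the robot's actual sensor readings."""
    w = 1.0
    for p, m in zip(predicted, measured):
        w *= gaussian(p, sense_noise, m)
    return w

def resample(particles, weights):
    """Draw len(particles) new particles with replacement, with
    probability proportional to the weights (the 'resampling wheel')."""
    n = len(particles)
    out, index, beta = [], random.randrange(n), 0.0
    w_max = max(weights)
    for _ in range(n):
        beta += random.uniform(0, 2 * w_max)
        while weights[index] < beta:
            beta -= weights[index]
            index = (index + 1) % n
        out.append(particles[index])
    return out
```

A particle whose predicted distances closely match the measurements receives a much larger weight than one far from the robot's true pose, so it dominates the resampled set.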

Each particle has its own importance weight and survives at random, with a survival probability proportional to that weight. The larger the weight of a particle, the more likely it is to represent the robot's position and the more likely it is to survive. After resampling, i.e., randomly drawing N new particles from the old ones, with replacement, in proportion to their importance weights, the particles with smaller weights, representing less probable robot positions, are likely to be discarded.

4.3. Noise

Three types of noise values [11] are considered during the localization of the robot, as follows:
— Forward noise: the error value, in centimeters, that must be taken into account when the robot moves in the forward direction.
— Turn noise: the error value, in degrees, that must be taken into account when the robot executes a right or left turn at its current position.
— Sense noise: the error value, in centimeters, that must be taken into account because the ultrasonic sensor cannot measure exact distances.

4.4. Particles

To create the particles, a class is defined in MATLAB, where an object of that class represents a single particle. Each particle created from this class has six attributes: X-coordinate value, Y-coordinate value, orientation, forward noise, turn noise, and sense noise. The position and orientation of a particle are randomly initialized when the particle is created. The forward, turn, and sense noise values of each particle are initially set to 0.5 by default.
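The particle just described might be defined as follows (a Python sketch of the MATLAB class; the map bounds passed to the constructor are illustrative, not from the paper):

```python
import random

class Particle:
    """One guess of the robot's pose, with the six attributes listed above."""
    def __init__(self, world_x=500.0, world_y=500.0):
        self.x = random.uniform(0, world_x)        # X-coordinate value (cm)
        self.y = random.uniform(0, world_y)        # Y-coordinate value (cm)
        self.orientation = random.uniform(0, 360)  # degrees from the map X-axis
        self.forward_noise = 0.5                   # default noise values
        self.turn_noise = 0.5
        self.sense_noise = 0.5
```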

tions, for a total of 12000 records, the file size will be approximately 188 kBytes. 4. Kidnapped Robot Problem

The algorithm implemented and discussed here is used by the TriBot to find its approximate location in an existing map by taking the readings of its surroundings. During the map construction process, the robot finds its position through the relative displacements from the origin of the map. In a situation where the robot only knows the coordinates of the objects of the map and does not know its position relative to the origin, the robot’s position is localized within the environment by taking measurements from its immediate surroundings and by moving it within the area for which the global map is available. Figure 5 depicts these two steps.

Fig. 5. Localization process used by the TriBot

The position of the particles which represent the guesses of the robot’s possible position within the map, are taken as the initial belief for the particles. The particles then update their belief of the robot’s position by using the measurements taken by the robot’s ultrasonic sensor which leads to survival of only a few of the particles with probable initial be-liefs. This selection of the particles with probable beliefs is repeated continuously, by moving the robot and the particles by some distance and taking subse-quent measurements, till all the particles are clus-tered around a single position within the map which can safely predict the robot’s approximate location. 4.1. Particle Filters

Particle filters are used in this work as they are easier to program for continuous value problems, as compared to Kalman Filters [10, 11]. Particle filters also work better than Histogram Filters [11] in high-er-dimensional spaces. Table 1 compares the Histo-gram, Kalman, and Particle filters on the basis of two characteristics. Table 1. Comparisons between Histogram, Kalman and Particle Filters

Filter Name State-Space Belief

Histogram Discrete Multi-Modal Kalman Continuous Uni-Modal Particle Continuous Multi-Modal

4.2. Localization using Particle Filters After satisfactory map construction of the room, the

robot is kidnapped i.e., removed from the environment and then placed randomly somewhere inside the area represented by the global map. The current position of the robot within the map is unknown and is deter-mined by finding distances to nearby objects with the use of the ultrasonic sensor, and calculating a good posterior distribution of where the robot is based on those readings.

Each particle represents the possible position and orientation of the robot in the map. Set of hun-dreds of such (N) particles together comprise an approximate representation of the posterior distri-bution of the robot [11]. In the beginning, all the N particles are uniformly scattered, and the filter al-lows them to survive in proportion to how consistent these particles are with the sensor readings, such that the particles which are more consistent with the sensor measurements are considered fitter and are more likely to survive. Measurements of the robot’s surroundings are recorded and applied to all the particles individually. As a result, closer the particle is to the correct position, more likely the possibility that the measurements belong to that particle, and higher the likelihood of its survival. The particles which are highly consistent with the measurements form clusters around a single position in the map which approximately represents the position of ro-bot as it localizes itself within the map.

Each particle has its own importance weight and survives at random, though the probability of its sur-vival is proportional to its importance weight. Larger the weight of the particle, more likely it is that it repre-sents the robot’s position and more likely that it sur-vives. After resampling i.e., randomly drawing N new particles from the old ones in proportion with their importance weights, with replacement, the smaller weighted particles representing less probable robot positions are likely to be discarded. 4.3. Noise

Three types of noise values [11] are considered dur-ing the localization of the robot, as follows: — Forward noise – The error value, in centimeters,

that must be taken into account when the robot moves in the forward direction.

— Turn noise – The error value, in degrees, which must be taken into consideration when the robot executes a right or left turn on its current position.

— Sense noise – The error value, in centimeters, that must be taken into account due to the ultrasonic sensor’s inability to measure exact distances.

4.4. Particles To create the particles a class is created in

MATLAB, where an object of that class represents a single particle. Each particle created using this class has six attributes, namely – X-coordinate value, Y-coordinate value, Orientation, Forward noise, Turn noise, Sense noise. The position and orientation of a particle are randomly initialized on creation of the particle. The forward, turn and sense noise values for each particle are initially set to 0.5, by default.

tions, for a total of 12000 records, the file size will be approximately 188 kBytes. 4. Kidnapped Robot Problem

The algorithm implemented and discussed here is used by the TriBot to find its approximate location in an existing map by taking the readings of its surroundings. During the map construction process, the robot finds its position through the relative displacements from the origin of the map. In a situation where the robot only knows the coordinates of the objects of the map and does not know its position relative to the origin, the robot’s position is localized within the environment by taking measurements from its immediate surroundings and by moving it within the area for which the global map is available. Figure 5 depicts these two steps.

Fig. 5. Localization process used by the TriBot

The position of the particles which represent the guesses of the robot’s possible position within the map, are taken as the initial belief for the particles. The particles then update their belief of the robot’s position by using the measurements taken by the robot’s ultrasonic sensor which leads to survival of only a few of the particles with probable initial be-liefs. This selection of the particles with probable beliefs is repeated continuously, by moving the robot and the particles by some distance and taking subse-quent measurements, till all the particles are clus-tered around a single position within the map which can safely predict the robot’s approximate location. 4.1. Particle Filters

Particle filters are used in this work as they are easier to program for continuous value problems, as compared to Kalman Filters [10, 11]. Particle filters also work better than Histogram Filters [11] in high-er-dimensional spaces. Table 1 compares the Histo-gram, Kalman, and Particle filters on the basis of two characteristics. Table 1. Comparisons between Histogram, Kalman and Particle Filters

Filter Name State-Space Belief

Histogram Discrete Multi-Modal Kalman Continuous Uni-Modal Particle Continuous Multi-Modal

4.2. Localization using Particle Filters After satisfactory map construction of the room, the

robot is kidnapped i.e., removed from the environment and then placed randomly somewhere inside the area represented by the global map. The current position of the robot within the map is unknown and is deter-mined by finding distances to nearby objects with the use of the ultrasonic sensor, and calculating a good posterior distribution of where the robot is based on those readings.

Each particle represents the possible position and orientation of the robot in the map. Set of hun-dreds of such (N) particles together comprise an approximate representation of the posterior distri-bution of the robot [11]. In the beginning, all the N particles are uniformly scattered, and the filter al-lows them to survive in proportion to how consistent these particles are with the sensor readings, such that the particles which are more consistent with the sensor measurements are considered fitter and are more likely to survive. Measurements of the robot’s surroundings are recorded and applied to all the particles individually. As a result, closer the particle is to the correct position, more likely the possibility that the measurements belong to that particle, and higher the likelihood of its survival. The particles which are highly consistent with the measurements form clusters around a single position in the map which approximately represents the position of ro-bot as it localizes itself within the map.

Each particle has its own importance weight and survives at random, though the probability of its sur-vival is proportional to its importance weight. Larger the weight of the particle, more likely it is that it repre-sents the robot’s position and more likely that it sur-vives. After resampling i.e., randomly drawing N new particles from the old ones in proportion with their importance weights, with replacement, the smaller weighted particles representing less probable robot positions are likely to be discarded. 4.3. Noise

Three types of noise values [11] are considered dur-ing the localization of the robot, as follows: — Forward noise – The error value, in centimeters,

that must be taken into account when the robot moves in the forward direction.

— Turn noise – The error value, in degrees, which must be taken into consideration when the robot executes a right or left turn on its current position.

— Sense noise – The error value, in centimeters, that must be taken into account due to the ultrasonic sensor’s inability to measure exact distances.

4.4. Particles To create the particles a class is created in

MATLAB, where an object of that class represents a single particle. Each particle created using this class has six attributes, namely – X-coordinate value, Y-coordinate value, Orientation, Forward noise, Turn noise, Sense noise. The position and orientation of a particle are randomly initialized on creation of the particle. The forward, turn and sense noise values for each particle are initially set to 0.5, by default.

tions, for a total of 12000 records, the file size will be approximately 188 kBytes. 4. Kidnapped Robot Problem

The algorithm implemented and discussed here is used by the TriBot to find its approximate location in an existing map by taking the readings of its surroundings. During the map construction process, the robot finds its position through the relative displacements from the origin of the map. In a situation where the robot only knows the coordinates of the objects of the map and does not know its position relative to the origin, the robot’s position is localized within the environment by taking measurements from its immediate surroundings and by moving it within the area for which the global map is available. Figure 5 depicts these two steps.

Fig. 5. Localization process used by the TriBot

The position of the particles which represent the guesses of the robot’s possible position within the map, are taken as the initial belief for the particles. The particles then update their belief of the robot’s position by using the measurements taken by the robot’s ultrasonic sensor which leads to survival of only a few of the particles with probable initial be-liefs. This selection of the particles with probable beliefs is repeated continuously, by moving the robot and the particles by some distance and taking subse-quent measurements, till all the particles are clus-tered around a single position within the map which can safely predict the robot’s approximate location. 4.1. Particle Filters

Particle filters are used in this work as they are easier to program for continuous value problems, as compared to Kalman Filters [10, 11]. Particle filters also work better than Histogram Filters [11] in high-er-dimensional spaces. Table 1 compares the Histo-gram, Kalman, and Particle filters on the basis of two characteristics. Table 1. Comparisons between Histogram, Kalman and Particle Filters

Filter Name State-Space Belief

Histogram Discrete Multi-Modal Kalman Continuous Uni-Modal Particle Continuous Multi-Modal

4.2. Localization using Particle Filters After satisfactory map construction of the room, the

robot is kidnapped i.e., removed from the environment and then placed randomly somewhere inside the area represented by the global map. The current position of the robot within the map is unknown and is deter-mined by finding distances to nearby objects with the use of the ultrasonic sensor, and calculating a good posterior distribution of where the robot is based on those readings.

Each particle represents the possible position and orientation of the robot in the map. Set of hun-dreds of such (N) particles together comprise an approximate representation of the posterior distri-bution of the robot [11]. In the beginning, all the N particles are uniformly scattered, and the filter al-lows them to survive in proportion to how consistent these particles are with the sensor readings, such that the particles which are more consistent with the sensor measurements are considered fitter and are more likely to survive. Measurements of the robot’s surroundings are recorded and applied to all the particles individually. As a result, closer the particle is to the correct position, more likely the possibility that the measurements belong to that particle, and higher the likelihood of its survival. The particles which are highly consistent with the measurements form clusters around a single position in the map which approximately represents the position of ro-bot as it localizes itself within the map.

Each particle has its own importance weight and survives at random, though the probability of its sur-vival is proportional to its importance weight. Larger the weight of the particle, more likely it is that it repre-sents the robot’s position and more likely that it sur-vives. After resampling i.e., randomly drawing N new particles from the old ones in proportion with their importance weights, with replacement, the smaller weighted particles representing less probable robot positions are likely to be discarded. 4.3. Noise

Three types of noise values [11] are considered dur-ing the localization of the robot, as follows: — Forward noise – The error value, in centimeters,

that must be taken into account when the robot moves in the forward direction.

— Turn noise – The error value, in degrees, which must be taken into consideration when the robot executes a right or left turn on its current position.

— Sense noise – The error value, in centimeters, that must be taken into account due to the ultrasonic sensor’s inability to measure exact distances.

4.4. Particles To create the particles a class is created in

MATLAB, where an object of that class represents a single particle. Each particle created using this class has six attributes, namely – X-coordinate value, Y-coordinate value, Orientation, Forward noise, Turn noise, Sense noise. The position and orientation of a particle are randomly initialized on creation of the particle. The forward, turn and sense noise values for each particle are initially set to 0.5, by default.

tions, for a total of 12000 records, the file size will be approximately 188 kBytes. 4. Kidnapped Robot Problem

The algorithm implemented and discussed here is used by the TriBot to find its approximate location in an existing map by taking the readings of its surroundings. During the map construction process, the robot finds its position through the relative displacements from the origin of the map. In a situation where the robot only knows the coordinates of the objects of the map and does not know its position relative to the origin, the robot’s position is localized within the environment by taking measurements from its immediate surroundings and by moving it within the area for which the global map is available. Figure 5 depicts these two steps.

Fig. 5. Localization process used by the TriBot

The position of the particles which represent the guesses of the robot’s possible position within the map, are taken as the initial belief for the particles. The particles then update their belief of the robot’s position by using the measurements taken by the robot’s ultrasonic sensor which leads to survival of only a few of the particles with probable initial be-liefs. This selection of the particles with probable beliefs is repeated continuously, by moving the robot and the particles by some distance and taking subse-quent measurements, till all the particles are clus-tered around a single position within the map which can safely predict the robot’s approximate location. 4.1. Particle Filters

Particle filters are used in this work as they are easier to program for continuous value problems, as compared to Kalman Filters [10, 11]. Particle filters also work better than Histogram Filters [11] in high-er-dimensional spaces. Table 1 compares the Histo-gram, Kalman, and Particle filters on the basis of two characteristics. Table 1. Comparisons between Histogram, Kalman and Particle Filters

Filter Name State-Space Belief

Histogram Discrete Multi-Modal Kalman Continuous Uni-Modal Particle Continuous Multi-Modal

4.2. Localization using Particle Filters After satisfactory map construction of the room, the

robot is kidnapped i.e., removed from the environment and then placed randomly somewhere inside the area represented by the global map. The current position of the robot within the map is unknown and is deter-mined by finding distances to nearby objects with the use of the ultrasonic sensor, and calculating a good posterior distribution of where the robot is based on those readings.

Each particle represents the possible position and orientation of the robot in the map. Set of hun-dreds of such (N) particles together comprise an approximate representation of the posterior distri-bution of the robot [11]. In the beginning, all the N particles are uniformly scattered, and the filter al-lows them to survive in proportion to how consistent these particles are with the sensor readings, such that the particles which are more consistent with the sensor measurements are considered fitter and are more likely to survive. Measurements of the robot’s surroundings are recorded and applied to all the particles individually. As a result, closer the particle is to the correct position, more likely the possibility that the measurements belong to that particle, and higher the likelihood of its survival. The particles which are highly consistent with the measurements form clusters around a single position in the map which approximately represents the position of ro-bot as it localizes itself within the map.

Each particle has its own importance weight and survives at random, though the probability of its sur-vival is proportional to its importance weight. Larger the weight of the particle, more likely it is that it repre-sents the robot’s position and more likely that it sur-vives. After resampling i.e., randomly drawing N new particles from the old ones in proportion with their importance weights, with replacement, the smaller weighted particles representing less probable robot positions are likely to be discarded. 4.3. Noise

Three types of noise values [11] are considered dur-ing the localization of the robot, as follows: — Forward noise – The error value, in centimeters,

that must be taken into account when the robot moves in the forward direction.

— Turn noise – The error value, in degrees, which must be taken into consideration when the robot executes a right or left turn on its current position.

— Sense noise – The error value, in centimeters, that must be taken into account due to the ultrasonic sensor’s inability to measure exact distances.

4.4. Particles To create the particles a class is created in

MATLAB, where an object of that class represents a single particle. Each particle created using this class has six attributes, namely – X-coordinate value, Y-coordinate value, Orientation, Forward noise, Turn noise, Sense noise. The position and orientation of a particle are randomly initialized on creation of the particle. The forward, turn and sense noise values for each particle are initially set to 0.5, by default.

tions, for a total of 12000 records, the file size will be approximately 188 kBytes. 4. Kidnapped Robot Problem

The algorithm implemented and discussed here is used by the TriBot to find its approximate location in an existing map by taking the readings of its surroundings. During the map construction process, the robot finds its position through the relative displacements from the origin of the map. In a situation where the robot only knows the coordinates of the objects of the map and does not know its position relative to the origin, the robot’s position is localized within the environment by taking measurements from its immediate surroundings and by moving it within the area for which the global map is available. Figure 5 depicts these two steps.

Fig. 5. Localization process used by the TriBot

The position of the particles which represent the guesses of the robot’s possible position within the map, are taken as the initial belief for the particles. The particles then update their belief of the robot’s position by using the measurements taken by the robot’s ultrasonic sensor which leads to survival of only a few of the particles with probable initial be-liefs. This selection of the particles with probable beliefs is repeated continuously, by moving the robot and the particles by some distance and taking subse-quent measurements, till all the particles are clus-tered around a single position within the map which can safely predict the robot’s approximate location. 4.1. Particle Filters

Particle filters are used in this work as they are easier to program for continuous-value problems than Kalman filters [10, 11]. Particle filters also work better than histogram filters [11] in higher-dimensional spaces. Table 1 compares the histogram, Kalman, and particle filters on the basis of two characteristics.

Table 1. Comparisons between Histogram, Kalman and Particle Filters

Filter Name   State-Space   Belief
Histogram     Discrete      Multi-Modal
Kalman        Continuous    Uni-Modal
Particle      Continuous    Multi-Modal

4.2. Localization using Particle Filters

After satisfactory map construction of the room, the robot is kidnapped, i.e., removed from the environment and then placed randomly somewhere inside the area represented by the global map. The current position of the robot within the map is unknown; it is determined by finding distances to nearby objects with the ultrasonic sensor and calculating a good posterior distribution of where the robot is based on those readings.

Each particle represents a possible position and orientation of the robot in the map. A set of hundreds of such particles (N in all) together comprises an approximate representation of the posterior distribution of the robot [11]. In the beginning, all N particles are uniformly scattered, and the filter allows them to survive in proportion to how consistent they are with the sensor readings: particles that are more consistent with the sensor measurements are considered fitter and are more likely to survive. Measurements of the robot’s surroundings are recorded and applied to all the particles individually. As a result, the closer a particle is to the correct position, the more likely it is that the measurements belong to that particle, and the higher the likelihood of its survival. The particles which are highly consistent with the measurements form clusters around a single position in the map, which approximately represents the position of the robot as it localizes itself within the map.

Each particle has its own importance weight and survives at random, though the probability of its survival is proportional to its importance weight: the larger the weight of a particle, the more likely it is that it represents the robot’s position and the more likely it is to survive. After resampling, i.e., randomly drawing N new particles from the old ones in proportion to their importance weights, with replacement, the smaller-weighted particles representing less probable robot positions are likely to be discarded.

4.3. Noise

Three types of noise values [11] are considered during the localization of the robot, as follows:
— Forward noise – the error value, in centimeters, that must be taken into account when the robot moves in the forward direction.
— Turn noise – the error value, in degrees, that must be taken into consideration when the robot executes a right or left turn on its current position.
— Sense noise – the error value, in centimeters, that must be taken into account due to the ultrasonic sensor’s inability to measure exact distances.
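The forward and turn noise values enter the motion model as zero-mean Gaussian perturbations (as the algorithm section later states: a random Gaussian value with zero mean and the forward or turn noise as standard deviation is added on each move). A minimal Python sketch of such a noisy motion update, assuming the function name `move` and a radian convention (the text quotes turn noise in degrees, so a caller would convert first):

```python
import math
import random

def move(x, y, orientation, turn, forward, turn_noise, forward_noise):
    """Apply a turn and a forward motion with zero-mean Gaussian noise.

    turn and turn_noise are in radians here; forward and forward_noise
    are in centimeters. The noise terms model the wheel-motor errors
    described in the text.
    """
    # Perturb the heading, then normalize to [0, 2*pi).
    orientation = (orientation + turn + random.gauss(0.0, turn_noise)) % (2.0 * math.pi)
    # Perturb the distance actually travelled.
    dist = forward + random.gauss(0.0, forward_noise)
    x += dist * math.cos(orientation)
    y += dist * math.sin(orientation)
    return x, y, orientation
```

The same update is applied identically to the robot's pose estimate and to every particle, which is what lets the particle cloud track the real motion.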

4.4. Particles

To create the particles, a class is created in MATLAB, where an object of that class represents a single particle. Each particle created using this class has six attributes, namely: X-coordinate value, Y-coordinate value, orientation, forward noise, turn noise and sense noise. The position and orientation of a particle are randomly initialized on creation of the particle. The forward, turn and sense noise values for each particle are initially set to 0.5, by default.
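The class described above can be sketched as follows. The paper's implementation is in MATLAB; this Python version is only an illustrative sketch, and the `world_x`/`world_y` bounds for the random initialization are hypothetical (the text does not state the initialization range):

```python
import math
import random

class Particle:
    """One guess of the robot's pose, mirroring the six attributes in the text."""

    def __init__(self, world_x=100.0, world_y=100.0):
        # Position and orientation are randomly initialized on creation.
        self.x = random.uniform(0.0, world_x)
        self.y = random.uniform(0.0, world_y)
        self.orientation = random.uniform(0.0, 2.0 * math.pi)
        # The three noise attributes default to 0.5, as stated in the text.
        self.forward_noise = 0.5
        self.turn_noise = 0.5
        self.sense_noise = 0.5

    def set_noise(self, forward, turn, sense):
        """Override the defaults, e.g. with the measured TriBot values."""
        self.forward_noise = forward
        self.turn_noise = turn
        self.sense_noise = sense
```

With the measured values from Section 4.5, one would call `set_noise(0.05, 0.25, 3.0)` on each particle after creation.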

Page 28: Jamris 2013 vol 7 no 3

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 7, N° 3 2013

Articles 27

4.5. Particle Filter Algorithm

Firstly, 250 particles are created using the class implemented in MATLAB, and their noise attribute values are set according to the measured precision of the TriBot’s wheeled motors and ultrasonic sensor. The forward, turn, and sense noise values of the TriBot are calculated from the accuracy of the ultrasonic sensor and the motors, and are set to 0.05 centimeters, 0.25 degrees and 3 centimeters respectively for the current implementation.

At the current unknown position of the robot, a reading is taken by the ultrasonic sensor at a certain angle, between 0 and 45 degrees, with respect to the robot reference frame. Seven more such readings are taken at intervals of 45 degrees each, which gives a total of eight measurements for the robot’s current position. For each particle, these angles are used to find the distances between the particle’s position and the objects in the map at these angles, with respect to the particle’s orientation. The probability that a particle represents the approximate location of the robot is computed using the equation:

p = ∏_α (1/√(2πσ²)) · exp(−(μ − x)² ⁄ (2σ²))    (1)

where μ is the distance of the particle from an object, x is the observed distance measurement at that angle with respect to the robot reference frame, σ is the sense noise, and α ranges over the set of all angles at which the eight readings were recorded. This probability, p, is eventually treated as the importance weight for that particle.
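Equation (1) can be sketched directly in Python: each angle contributes one Gaussian factor, and the factors are multiplied into a single probability. The function name and argument names are hypothetical; the math follows the equation and the symbol definitions above:

```python
import math

def measurement_prob(predicted_dists, measured_dists, sense_noise):
    """Product of Gaussian likelihoods over all measurement angles.

    predicted_dists: distances from the particle to the map objects (mu)
    measured_dists:  the robot's ultrasonic readings at the same angles (x)
    sense_noise:     sigma, the sense noise of the particle
    """
    prob = 1.0
    for mu, x in zip(predicted_dists, measured_dists):
        gauss = math.exp(-((mu - x) ** 2) / (2.0 * sense_noise ** 2))
        gauss /= math.sqrt(2.0 * math.pi * sense_noise ** 2)
        prob *= gauss
    return prob
```

A particle whose predicted distances closely match the readings gets a weight near the Gaussian peak; mismatched particles get exponentially smaller weights, which is what drives the survival-of-the-fittest resampling.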

The particle filter implemented in the current system is shown in Algorithm 1. RoomX and RoomY are arrays holding the X and Y coordinates of the room boundary and objects, recorded during map construction. Line 2 creates a set of particles P by creating objects of the class and initializes the respective attribute values for each particle. P is a collection of one-dimensional arrays, viz. Px, Py, Porientation, Psense, Pturn and Pforward, of size N each, where each array represents an attribute. Array Px records the X-coordinate value for all the N particles in sequence, Py stores the Y-coordinate value for all the particles, and so on. W is an array of size N that sequentially stores the importance weights of all the particles in P. Line 3 sets the noise values of all the particles.

The constant t defines the number of times the algorithm must be run; a value of 10 is fixed in the implementation. The initially empty array, select, is used to save the indices of the room coordinates that satisfy the measurement reading, Z, when applied to a particle. The variable angle can be initialized manually to a value between 0 and π/4 radians; a value of π/6 radians has been chosen in the implementation.

In line 7, the robot takes a distance reading of its surroundings at the angle specified. The function polar takes the arrays of Cartesian coordinates of the room and converts them to polar coordinates with respect to the origin of the global map, where R represents the array of distances and Ɵ represents the array of angles of those points, in radians. The size function returns the number of rows in an array. The values distX, distY represent the distances between the X and Y coordinates of a particle and the coordinates of the object of the room observed at a certain angle by the robot. These values are used to find the distance of the particle from the coordinates in the map, which is then used in lines 29 and 30 to find the importance weight of the particle. Line 33 applies the resampling algorithm, which randomly draws N new particles from the old ones, with replacement, according to their importance weights. Line 37 implements the movement of the TriBot and all the particles, by turning them by 20 degrees anticlockwise and moving them forward by a distance of 6 centimeters. A random Gaussian distribution value with zero mean, and the standard deviation set as the forward or turn noise, is added when the particles are moved forward or rotated.

Algorithm 1: Particle Filter Algorithm for Localization.
Input: RoomX, RoomY, N

1 begin
2   Create N uniformly distributed particles P, and initialize attributes Px, Py, Porientation, Psense, Pturn, and Pforward.
3   Set noise values of Psense, Pturn, Pforward.
4   for t ← 1 to 10 do
5     angle ← π/6 ;
6     for i ← 1 to 8 do
7       Z ← Distance reading with TriBot(angle) ;
8       select ← [ ] ;
9       (R, Ɵ) ← polar(RoomX, RoomY) ;
10      for j ← 1 to N do
11        W(j) ← 1.0 ;
12        for k ← 1 to N do
13          dcos ← R(k) * cos(Ɵ(k)) − Px(j) ;
14          dsin ← R(k) * sin(Ɵ(k)) − Py(j) ;
15          d ← √(dcos² + dsin²) ;
16          if dcos ≅ 0 do
17            r ← π ⁄ 2 ;
18          else
19            r ← arctan(dsin ⁄ dcos) ;
20          end if (line 16)
21          if r ≅ (angle + Porientation(j)) do
22            Add k to select array.
23          end if (line 21)
24        end for (line 12)
25        for i ← 1 to size(select) do
26          distX ← RoomX(select(i)) − Px(j) ;
27          distY ← RoomY(select(i)) − Py(j) ;
28          dist ← √(distX² + distY²) ;
29          p ← exp(−(dist − Z)² ⁄ (2 ∗ (Psense(j))²)) ⁄ √(2π ∗ (Psense(j))²) ;
30          W(j) ← W(j) ∗ p ;
31        end for (line 25)
32      end for (line 10)
33      P2 ← Resampling Wheel Algorithm (W, N).
34      P ← P2 ;
35      angle ← angle + π/4 ;
36    end for (line 6)
37    Move the robot and particles.
38  end for (line 4)
39 end
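The per-particle weighting step of Algorithm 1 (selecting the map points whose bearing from the particle matches the measurement angle, then scoring the measured distance Z against the distance to those points) can be sketched in Python under stated assumptions: `atan2` replaces the dcos ≅ 0 special case of the pseudocode, and the tolerance `eps` is a hypothetical stand-in for its approximate-equality test:

```python
import math

def weight_particle(px, py, p_orientation, sense_noise,
                    room_x, room_y, z, angle, eps=0.1):
    """Importance weight of one particle for one ultrasonic reading.

    room_x, room_y: map coordinates recorded during map construction
    z:              distance reading taken at 'angle' in the robot frame
    eps:            bearing tolerance in radians (hypothetical)
    """
    target = (angle + p_orientation) % (2.0 * math.pi)
    selected = []
    for rx, ry in zip(room_x, room_y):
        # Bearing of the map point as seen from the particle.
        r = math.atan2(ry - py, rx - px)
        # Smallest signed angular difference to the measurement bearing.
        diff = (r - target + math.pi) % (2.0 * math.pi) - math.pi
        if abs(diff) < eps:
            selected.append((rx, ry))
    w = 1.0
    for rx, ry in selected:
        dist = math.hypot(rx - px, ry - py)
        gauss = math.exp(-((dist - z) ** 2) / (2.0 * sense_noise ** 2))
        gauss /= math.sqrt(2.0 * math.pi * sense_noise ** 2)
        w *= gauss
    return w
```

Running this for all eight angles and multiplying the results reproduces the accumulation of W(j) across the measurement loop.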

To resample the particles on the basis of their importance weights, a particle that has a higher importance weight must be sampled, or picked, more times than a particle that has a low importance weight. This is achieved through the resampling wheel algorithm [11] described in Algorithm 2. First, an initial integer guess of a particle number is drawn at random from the uniform interval (1, N) and labeled index. A value beta, initialized as zero, is incremented by a real value drawn uniformly from the interval (0, 2*maxw), where maxw is the maximum importance weight in W. If the weight of the index particle is smaller than the current value of beta, then the weight of that particle is subtracted from beta and index is incremented by one. Otherwise, if the beta value is lower than the weight of the index particle, then that particle is chosen to survive.

In the given algorithm, Ʋ1 represents a function that returns a uniformly drawn integer value and Ʋ2 returns a uniform real value from the interval specified. The max function returns the maximum value among all the elements of an array. P2 is, initially, an empty set of arrays and stores the attribute values of the particles selected for survival after resampling.

Algorithm 2. Resampling Wheel Algorithm.
Input: W – importance weights of the particles, N – number of particles
Output: P2 – particles selected after resampling

1 begin
2   index ← Ʋ1(1, N) ;
3   beta ← 0 ;
4   P2 ← [ ] ;
5   maxw ← max(W) ;
6   for i ← 1 to N do
7     beta ← beta + Ʋ2(0, 2*maxw) ;
8     while W(index) < beta do
9       beta ← beta − W(index) ;
10      index ← index + 1 ;
11    end while
12    Add the index particle to P2.
13  end for
14 end
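Algorithm 2 translates almost line for line into Python. One detail is made explicit in this sketch: the index pointer wraps around modulo N (as in the resampling-wheel description in [11]), so it can circle past the end of the weight array; the pseudocode leaves this implicit:

```python
import random

def resampling_wheel(particles, weights):
    """Draw len(particles) survivors, each with probability
    proportional to its importance weight (with replacement)."""
    n = len(particles)
    index = random.randint(0, n - 1)            # initial integer guess (U1)
    beta = 0.0
    maxw = max(weights)
    survivors = []
    for _ in range(n):
        beta += random.uniform(0.0, 2.0 * maxw)  # real increment (U2)
        while weights[index] < beta:
            beta -= weights[index]
            index = (index + 1) % n              # wrap around the wheel
        survivors.append(particles[index])
    return survivors
```

Because each step of beta is at most 2*maxw, a particle whose weight is close to maxw can absorb several consecutive steps and be picked repeatedly, which is exactly the behavior the text describes for small beta values.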

If the beta value is small, there is a high possibility of a particle with a large importance weight being picked more than once during resampling. After resampling, N new particles are obtained, each having new position and orientation values derived from old particles with high importance weights.

At the end of the process, an approximate location of the robot is found by analyzing the X and Y coordinates of the locations where most of the particles are clustered in the global map.

5. Experimental Results

Figure 6 shows the area under consideration for the purposes of experimentation. The cross mark shows the robot’s starting position for map construction. The light-gray colored portion in the figure is beyond the environment under observation.

Fig. 6. Room used for Map Construction and Localization by the TriBot

5.1. Map Construction

Figure 7 shows the two-dimensional maps created after taking readings at the first two positions. All the objects recorded by the TriBot are displayed as a point cloud of black dots, as the ultrasonic sensor returns a single value of the straight-line distance of an object from the sensor. In the figure, the positions of the robot where the readings were taken are displayed as magenta colored dots. Figure 7a shows the local map created at the origin position by the TriBot during map construction. Figure 7b shows the existing map after adding the readings taken at the second position to the map created in Figure 7a.




Figure 8 shows the map created after one scan of the room. In the map constructed, the complete room is scanned only once; for better results of map construction and localization, it is possible to let the TriBot scan the room continuously till a highly detailed map is constructed. Noisy readings, representing ghost images, can be seen in the map; these lie either in the region beyond the room boundaries or within the empty area of the original room shown in Figure 6.


Fig. 7. (a) Map obtained after taking readings at the origin position. (b) Map obtained after taking readings at second position and adding them to the existing map

Fig. 8. Map constructed by the TriBot after one scan of the room, where the boundaries of the room can be easily identified

Figure 9 shows the map constructed in Figure 8, superimposed onto the room shown in Figure 6.

Fig. 9. Image of the room under consideration and the map created by the TriBot superimposed onto it.

5.2. Kidnapped Robot Problem

Figure 10 shows the particles initially distributed, almost uniformly. The green triangles represent the particles created initially. The blue asterisks represent the boundary of the room created during map construction.

Fig. 10. Particles created for localization of the robot and to solve the kidnapped robot problem

Fig. 11. Particles finally localized to the robot’s approximate position within the map

Figure 11 shows how the green colored particles have clustered on the right hand side of the map and predict the approximate position of the TriBot.

In later iterations of the algorithm, the particles move within the map as the robot moves in the actual environment. Figure 12 shows the particles after executing a left turn and moving forwards, and then further upwards, in the map as the robot moves.

Fig. 12. Particles moving with the robot’s actual movement. (a) The cloud of green particles is near the right hand side boundary. (b) The particles move upwards and left, further away from the boundary



6. Conclusion

Efforts have been made in this paper to create a map of an environment using the Lego Mindstorms NXT TriBot. This work may find applications in creating maps of unknown environments that are inaccessible to humans, by deploying robots into them, or in home security systems that detect anomalies when the map changes in the absence of human interference. The MATLAB programming environment with the RWTH – Mindstorms NXT Toolbox has been used for the implementation and testing of the system with the TriBot. The implemented system has been successfully tested with indoor map creation of a room and for subsequently solving the kidnapped robot problem with particle filter localization of the TriBot within that room. The robot in this work uses a single ultrasonic sensor, making it highly cost effective.

A better representation of the map can be created by making the environment multi-agent, with the use of several similar Lego Mindstorms NXT TriBots controlled via MATLAB, and by overlapping the individual maps created by them all. It is also possible to reduce the number of particles used in the implementation and achieve almost similar results to the ones shown in this paper.

AUTHORS
Jasmeet Singh*, Punam Bedi – Department of Computer Science, Faculty of Mathematical Sciences, New Academic Block, University of Delhi, Delhi-110007, India, [email protected], [email protected]

*Corresponding author

References
[1] Burguera A., González Y., Oliver G., “Mobile Robot Localization Using Particle Filters and Sonar Sensors”, Advances in Sonar Technology, In-Tech: Vienna, Austria, 2009, Chapter 10, pp. 213–232.

[2] Adiprawita W., Ahmad A. S., Sembiring J., Trilaksono B. R., “New Resampling Algorithm for Particle Filter Localization for Mobile Robot with 3 Ultrasonic Sonar Sensors”, In: Proceedings of the International Conference on Electrical Engineering and Informatics, Bandung, Indonesia, July 17–19, 2011, pp. 1431–1436.

[3] Burguera A., González Y., Oliver G., “Sonar Sensor Models and Their Application to Mobile Robot Localization”, Sensors, vol. 9, 2009, pp. 10217–10243.

[4] Thrun S., “Particle Filters in Robotics”, In: Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI), Edmonton, Alberta, Canada, August 1–4, 2002, pp. 511–518.

[5] Howell J., Donald B. R., “Practical Mobile Robot Self-Localization”, In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, USA, April 24–28, 2000, vol. 4, pp. 3485–3492.

[6] Yamauchi B., Schultz A., Adams W., “Mobile Robot Exploration and Map-Building with Continuous Localization”, In: Proceedings of the IEEE International Conference on Robotics and Automation (ICRA), Leuven, Belgium, May 16–20, 1998, vol. 4, pp. 3715–3720.

[7] Varveropoulos V., “Robot Localization and Map Construction Using Sonar Data”, The Rossum Project: 2000. Available online: http://www.rossum.sourceforge.net/papers/Localization (accessed on 17 January 2012).

[8] Howard A., “Multi-robot Simultaneous Localization and Mapping using Particle Filters”, Int. J. Robot. Res., vol. 25, 2006, pp. 1243–1256.

[9] RWTH – Mindstorms NXT Toolbox, RWTH Aachen University, 2010. Available online: http://www.mindstorms.rwth-aachen.de/trac/wiki (accessed on 18 August 2011).

[10] Fox D., Burgard W., Dellaert F., Thrun S., “Monte Carlo Localization: Efficient Position Estimation for Mobile Robots”, In: Proceedings of the Sixteenth National Conference on Artificial Intelligence, Orlando, FL, USA, July 18–22, 1999, pp. 343–349.

[11] Artificial Intelligence (CS373): Programming a Robotic Car, Udacity, 2012. Available online: http://www.udacity.com/overview/Course/cs373 (accessed on 26 February 2012).


6. Conclusion

Efforts have been made in this paper to create a map of an environment using Lego Mindstorms NXT TriBot. This work may find applications in creating maps of unknown environments that are inaccessible for humans, by deploying robots into them or in appli-cations of home security systems to detect anomalies when the map changes in the absence of human inter-ference. The MATLAB programming environment with RWTH- Mindstorms NXT Toolbox has been used for the purpose of implementation and testing of the system with the TriBot. The implemented system has been successfully tested with an indoor map creation of a room and for subsequently solving the kidnapped ro-bot problem with particle filter localization of the TriBot within that room. The robot in this work uses a single ultrasonic sensor, making it highly cost effective.

A better representation of the map can be created by making the environment multi-agent with the use of several similar Lego Mindstorms NXT TriBots that are controlled via MATLAB and by overlapping the individ-


Figure 11 shows how the green colored particles have clustered on the right hand side of the map and predict the approximate position of the TriBot.

In later iterations of the algorithm, the particles move within the map as the robot moves in the actual environment. Figure 12 shows how the particles move in the map after the robot executes a left turn, moves forward, and then moves further upward.


Fig. 12. Particles moving with the robot’s actual move-ment. (a) The cloud of green particles is near the right hand side boundary (b) The particles move upwards and left, further away from the boundary

6. Conclusion

Efforts have been made in this paper to create a map of an environment using the Lego Mindstorms NXT TriBot. This work may find applications in creating maps of unknown environments that are inaccessible to humans, by deploying robots into them, or in home-security systems that detect anomalies when the map changes in the absence of human interference. The MATLAB programming environment with the RWTH Mindstorms NXT Toolbox has been used to implement and test the system with the TriBot. The implemented system has been successfully tested on indoor map creation of a room and on subsequently solving the kidnapped-robot problem with particle-filter localization of the TriBot within that room. The robot in this work uses a single ultrasonic sensor, making it highly cost-effective.

A better representation of the map can be created by making the environment multi-agent, using several similar Lego Mindstorms NXT TriBots controlled via MATLAB and overlapping the individual maps created by them all. It is also possible to reduce the number of particles used in the implementation and achieve nearly the same results as the ones shown in this paper.
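The particle-filter localization used above follows the usual predict-weight-resample cycle. The sketch below is an illustrative Python version of that cycle, not the authors' MATLAB/RWTH-toolbox code; the motion and measurement noise values and the `expected_range_fn` map lookup are assumptions for the example.

```python
import math, random

def predict(particles, forward, turn, noise=(0.02, 0.01)):
    """Move every particle by the odometry reading plus Gaussian noise."""
    moved = []
    for x, y, theta in particles:
        t = theta + turn + random.gauss(0.0, noise[1])
        d = forward + random.gauss(0.0, noise[0])
        moved.append((x + d * math.cos(t), y + d * math.sin(t), t))
    return moved

def weight(particle, measured_range, expected_range_fn, sigma=0.05):
    """Likelihood of the ultrasonic reading given the particle's pose."""
    err = measured_range - expected_range_fn(particle)
    return math.exp(-err * err / (2 * sigma * sigma))

def resample(particles, weights):
    """Draw a new particle set proportionally to the weights."""
    total = sum(weights) or 1.0
    probs = [w / total for w in weights]
    return random.choices(particles, weights=probs, k=len(particles))
```

In the kidnapped-robot setting, the loop starts from particles spread uniformly over the map and repeats these three steps until the cloud clusters around the robot's true pose, as in Fig. 11.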

AUTHORS

Jasmeet Singh*, Punam Bedi – Department of Computer Science, Faculty of Mathematical Sciences, New Academic Block, University of Delhi, Delhi-110007, India, [email protected], [email protected].

*Corresponding author

References

[1] Burguera, A.; González, Y.; Oliver, G., "Mobile Robot Localization Using Particle Filters and Sonar Sensors", Advances in Sonar Technology, In-Tech: Vienna, Austria, 2009, Chapter 10, pp. 213–232.

[2] Adiprawita, W.; Ahmad, A. S.; Sembiring, J.; Trilaksono, B. R., "New Resampling Algorithm for Particle Filter Localization for Mobile Robot with 3 Ultrasonic Sonar Sensors", In Proceedings of International Conference on Electrical Engineering and Informatics, Bandung, Indonesia, July 17–19, 2011, pp. 1431–1436.

[3] Burguera, A.; González, Y.; Oliver, G., "Sonar Sensor Models and Their Application to Mobile Robot Localization", Sensors, vol. 9, 2009, pp. 10217–10243.

[4] S. Thrun, "Particle Filters in Robotics", In Proceedings of the 18th Annual Conference on Uncertainty in Artificial Intelligence (UAI), Edmonton, Alberta, Canada, August 1–4, 2002, pp. 511–518.

[5] Howell, J.; Donald, B. R., "Practical Mobile Robot Self-Localization", In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), San Francisco, CA, USA, April 24–28, 2000, vol. 4, pp. 3485–3492.

[6] Yamauchi, B.; Schultz, A.; Adams, W., "Mobile Robot Exploration and Map-Building with Continuous Localization", In Proceedings of IEEE International Conference on Robotics and Automation (ICRA), Leuven, Belgium, May 16–20, 1998, vol. 4, pp. 3715–3720.

[7] V. Varveropoulos, "Robot Localization and Map Construction Using Sonar Data", The Rossum Project, 2000. Available online: http://www.rossum.sourceforge.net/papers/Localization (accessed on 17 January 2012).

[8] A. Howard, "Multi-robot Simultaneous Localization and Mapping using Particle Filters", Int. J. Robot. Res., vol. 25, 2006, pp. 1243–1256.

[9] RWTH – Mindstorms NXT Toolbox, RWTH Aachen University, 2010. Available online: http://www.mindstorms.rwth-aachen.de/trac/wiki (accessed on 18 August 2011).


Page 32: Jamris 2013 vol 7 no 3

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 7, N° 3 2013

Articles 31

Synthesis of Quasi-Optimal Motion Crane’s Control in the Form of Feedback

Vjacheslav Lovejkin, Yuriy Romasevich, Yuriy Chovnuk

Submitted: 3rd February 2013; accepted: 17th May 2013

Abstract: The solution of the problem of optimal crane control is proposed in this article. The crane is modeled as a two-mass system. The synthesized quasi-optimal control allows one to eliminate vibrations of the load during braking of the crane. The control is a function of the phase coordinates of the dynamic system "crane-load" and is limited in magnitude. The method of dynamic programming is used for the solution of the problem. The results are illustrated with graphs plotted on the phase planes.

Keywords: quasi-optimal control, crane, load vibrations, dynamic programming, control in the form of feedback

1. Introduction

The handling of various cargoes with the help of bridge cranes is widespread. Such cranes are used in sea and river ports, in factories of the chemical and metallurgical industries, in mechanical engineering and elsewhere. Bridge-type cranes often work in unsteady operating modes (start, stop, reverse), and a steady mode of motion may be absent altogether. Dynamic processes occurring during the transient motion of crane mechanisms may therefore determine the efficiency of the crane. The cargo is usually suspended on a rope, and its vibrations affect the performance, reliability and efficiency of the bridge crane. The problem of eliminating the load's vibrations is particularly relevant for port reloaders and steel works: in the first case, eliminating the vibrations increases the crane's productivity and reduces the idle time of the ship in port; in the second, it increases the safety of the work.

Vibrations of the load on the rope appear during the transient motion of the crane, continue during its steady movement, and are present even after the crane stops. It is desirable to eliminate these vibrations as quickly as possible [1]. However, time-optimal control that eliminates the load's vibrations significantly increases the dynamic loads on the crane's elements, and the crane can quickly fail. Other methods of solving this problem may be used, for example fuzzy controllers [2–5]. The disadvantage of such methods is that they do not include the restrictions imposed on the drive mechanism of the crane, and the load's vibrations may have a large amplitude during the transient process.

Passive damping devices may also be used for the elimination of the load's vibrations; a number of such solutions are patented and used by different companies [6–8]. The main drawback of these methods is that they do not provide optimal control. That is why the problem of finding the optimal control that eliminates the oscillations of the crane's load is very important.

2.1. Problem of Optimal Control

For the research purposes one may take the two-mass dynamic model of the mechanism of movement of the crane, which is presented in Fig. 1. This model is common and is used by many researchers [9–11].

Fig. 1. A dynamic model of the system ”crane-load”

The above-mentioned calculation model (Fig. 1) is described by a system of differential equations:

$$\begin{cases}
m_1 \ddot{x}_1 + m_2 \ddot{x}_2 = F - W \operatorname{sign}(\dot{x}_1); \\
\ddot{x}_2 + \dfrac{g}{l}\,(x_2 - x_1) = 0,
\end{cases} \qquad (1)$$

where m1 is the mass of the crane together with the mass of the drive mechanism reduced to translational motion; m2 is the mass of the cargo; x1, x2 are the coordinates of the centers of mass of the crane and of the cargo; g is the acceleration of free fall; l is the length of the flexible suspension; F is the total traction or braking force acting on the crane; W is the resistance force reduced to the movement of the crane. A dot over a symbol denotes differentiation with respect to time.

We assume that during braking the crane does not change the direction of its motion, i.e. sign(ẋ₁) = 1. Let us rewrite the system of equations (1) in canonical form and add one more equation for the rate of change of the control:


$$\begin{cases}
\dot{y}_0 = y_1; \\
\dot{y}_1 = y_2; \\
\dot{y}_2 = y_3; \\
\dot{y}_3 = u - \Omega^2 y_2; \\
\dot{u} = \varphi,
\end{cases} \qquad (2)$$

where y0 is a function proportional to the coordinate of the load ($y_0 = x_2/\Omega_0^2$); Ω is the natural frequency of the load's vibration relative to the moving crane ($\Omega = \sqrt{(m_1 + m_2)\,g/(m_1 l)}$); Ω0 is the natural frequency of the load's vibration relative to the fixed crane ($\Omega_0 = \sqrt{g/l}$); u is the control function of the dynamic system "crane-load" ($u = (F - W)/m_1$); φ is the rate of change of the control. The restrictions imposed on the control u have the form of the inequality:

$$|u| \le u_{\max} = \frac{F_{\max} - W}{m_1}, \qquad (3)$$

where Fmax is the maximum driving force applied to the crane, which corresponds to the maximum torque on the motor shaft.
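As a numerical cross-check of the canonical form (2), the illustrative Python sketch below evaluates the two frequencies and the state derivative; the crane parameters (m1 = 1000 kg, m2 = 200 kg, l = 5 m) are assumed sample values, not data from the paper.

```python
import math

def frequencies(m1, m2, l, g=9.81):
    """Natural frequencies of the load relative to the moving and fixed crane."""
    omega = math.sqrt((m1 + m2) * g / (m1 * l))   # relative to the moving crane
    omega0 = math.sqrt(g / l)                     # relative to the fixed crane
    return omega, omega0

def canonical_rhs(state, phi, omega):
    """Right-hand side of system (2); state = (y0, y1, y2, y3, u)."""
    y0, y1, y2, y3, u = state
    return (y1, y2, y3, u - omega**2 * y2, phi)

omega, omega0 = frequencies(m1=1000.0, m2=200.0, l=5.0)
rhs = canonical_rhs((0.0, 0.1, 0.0, 0.0, 0.0), phi=0.0, omega=omega)
```

Note that Ω > Ω0 always holds, since the crane itself recoils under the swinging load.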

The movement of the crane with the load is characterized by initial conditions, which are written for the new phase coordinates y0, y1, y2, y3 as follows:

$$\begin{cases}
y_0(0) = \dfrac{x_2(0)}{\Omega_0^2} = \dfrac{x_1(0) - \Delta x(0)}{\Omega_0^2} \approx \dfrac{x_1(0) - l\,\alpha(0)}{\Omega_0^2}; \\[6pt]
y_1(0) = \dfrac{\dot{x}_2(0)}{\Omega_0^2} = \dfrac{\dot{x}_1(0) - \Delta \dot{x}(0)}{\Omega_0^2} \approx \dfrac{\dot{x}_1(0) - l\,\dot{\alpha}(0)}{\Omega_0^2}; \\[6pt]
y_2(0) = \dfrac{\ddot{x}_2(0)}{\Omega_0^2} = \dfrac{\ddot{x}_1(0) - \Delta \ddot{x}(0)}{\Omega_0^2} \approx \dfrac{\ddot{x}_1(0) - l\,\ddot{\alpha}(0)}{\Omega_0^2}; \\[6pt]
y_3(0) = \dfrac{\dddot{x}_2(0)}{\Omega_0^2} = \dfrac{\dddot{x}_1(0) - \Delta \dddot{x}(0)}{\Omega_0^2} \approx \dfrac{\dddot{x}_1(0) - l\,\dddot{\alpha}(0)}{\Omega_0^2},
\end{cases} \qquad (4)$$

where Δx is the difference of the coordinates of the crane and the load (Δx = x1 − x2), and α is the angle of deviation of the load's rope from the vertical. The approximate relation Δx ≈ lα used in (4) follows from sin α ≈ α for small values of α; this approach does not introduce significant errors.
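The error of the small-angle approximation sin α ≈ α underlying Δx ≈ lα can be checked numerically (illustrative sketch; the 5-degree deviation is an assumed example value):

```python
import math

def small_angle_rel_error(alpha):
    """Relative error of approximating sin(alpha) by alpha, alpha > 0 in radians."""
    return abs(alpha - math.sin(alpha)) / math.sin(alpha)

# for a 5-degree rope deviation the error is roughly alpha^2 / 6, i.e. well below 1%
err = small_angle_rel_error(math.radians(5.0))
```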

The initial conditions (4) indicate which parameters of the motion of the crane and of the load must be measured: they are needed both to determine these conditions and to implement the feedback control of the crane. As follows from (4), one must measure the coordinate of the crane's position and its time derivatives up to the third, as well as the length of the rope and the angle of deviation of the rope from the vertical together with its time derivatives up to the third. These parameters are measured with the help of three encoders. One encoder, installed on the cable drum, measures the length of the rope. The second encoder measures the position of the crane relative to zero. The third encoder measures the angle of the rope from the vertical; its output shaft is attached to the rope with the help of special fittings.

The following final conditions must be satisfied in order to eliminate the load's oscillations by the moment the crane finishes braking:

$$y_1(T) = 0; \quad y_2(T) = 0; \quad y_3(T) = 0. \qquad (5)$$

The first condition in (5) is equivalent to the load's speed being equal to zero; the second, to the difference of the coordinates of the crane and the cargo being equal to zero; and the third, to the difference of the speeds of the crane and the load being equal to zero. Thus, the energy of the load's oscillations and of the crane's movement is equal to zero exactly at the moment t = T. This means that the crane has stopped and the load does not vibrate.

In order to synthesize the control, one must set a criterion of optimality, which determines the single optimal control out of the entire set of alternatives. The criterion of optimality of the motion of the crane during braking may be adopted as the following integral functional:

$$I = \int_0^T \left( \sum_{i=1}^{3} \delta_i y_i^2 + \delta_4 u^2 + \delta_5 \varphi^2 \right) dt \to \min, \qquad (6)$$

where δ1, δ2, δ3, δ4, δ5 are coefficients, which can be calculated as follows:

$$\delta_j = k_j \tilde{I}_j, \quad j = 1, 2, 3, 4, 5, \qquad (7)$$

where kj – the weight’s coefficient that takes into ac-count the respective importance of the j-th index in the structure of the criterion; Ĩj – a factor that brings the dimension of the j-th index in the structure of the criterion (6) to dimensionless form.

Criterion (6) is a linear-quadratic integral criterion; it reflects both the phase coordinates of the dynamical system and the "costs" of its control.
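For a simulated trajectory, criterion (6) can be approximated numerically, for example by the trapezoidal rule. The sketch below is illustrative Python; the trajectory samples and the weights δ are whatever the caller supplies.

```python
def criterion(ts, ys, us, phis, delta):
    """Trapezoidal approximation of criterion (6).

    ts   -- time grid; ys -- list of (y1, y2, y3) samples;
    us, phis -- control and its rate of change; delta -- (d1, d2, d3, d4, d5).
    """
    d1, d2, d3, d4, d5 = delta

    def integrand(k):
        y1, y2, y3 = ys[k]
        return d1*y1**2 + d2*y2**2 + d3*y3**2 + d4*us[k]**2 + d5*phis[k]**2

    total = 0.0
    for k in range(len(ts) - 1):
        total += 0.5 * (integrand(k) + integrand(k + 1)) * (ts[k + 1] - ts[k])
    return total
```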

Thus, the task of the optimal control of the dynamic system "crane-load" is stated: the dynamic system must be transferred from the original position, characterized by the initial conditions (4), to the final one, characterized by the final conditions (5), while the optimality criterion (6) takes its least value. In addition, restriction (3) is imposed on the control in the form of an inequality, and the final moment of control T is not fixed.

2.2. Synthesis of Optimal Control

We use the method of dynamic programming [12] for solving the problem of optimal control. This method of synthesis yields the control as a function of the phase coordinates of the dynamical system, i.e., control in the form of feedback. The basic functional equation for this problem is written as follows:

$$\min_{\varphi}\left[\, \sum_{i=1}^{3} \delta_i y_i^2 + \delta_4 u^2 + \delta_5 \varphi^2 + \frac{\partial S}{\partial y_0}\, y_1 + \frac{\partial S}{\partial y_1}\, y_2 + \frac{\partial S}{\partial y_2}\, y_3 + \frac{\partial S}{\partial y_3}\left(u - \Omega^2 y_2\right) + \frac{\partial S}{\partial u}\, \varphi \right] = 0, \qquad (8)$$

where S is Bellman's function.


The problem will first be solved for the case when the control u is not restricted by (3); this circumstance gives one the possibility to find an analytical solution of the problem. The inequality (3) will be taken into account later. One may search for the minimum of the left-hand side of equation (8) over the function φ: let us differentiate it with respect to φ and equate the result to zero:

$$2\delta_5 \varphi + \frac{\partial S}{\partial u} = 0. \qquad (9)$$

From equation (9) we find the function:

$$\varphi = -\frac{1}{2\delta_5}\, \frac{\partial S}{\partial u}. \qquad (10)$$

Let’s put the equation (10) into the equation (8). Then we have:

$$\delta_1 y_1^2 + \delta_2 y_2^2 + \delta_3 y_3^2 + \delta_4 u^2 + \frac{\partial S}{\partial y_0}\, y_1 + \frac{\partial S}{\partial y_1}\, y_2 + \frac{\partial S}{\partial y_2}\, y_3 + \frac{\partial S}{\partial y_3}\left(u - \Omega^2 y_2\right) - \frac{1}{4\delta_5}\left(\frac{\partial S}{\partial u}\right)^2 = 0. \qquad (11)$$

Equation (11) is a nonlinear differential equation in partial derivatives. We seek its solution in the form of a quadratic form, as is usually done when solving similar problems [13]:

$$S = A_1 y_1^2 + A_2 y_2^2 + A_3 y_3^2 + A_4 u^2 + A_5 y_1 y_2 + A_6 y_1 y_3 + A_7 y_1 u + A_8 y_2 y_3 + A_9 y_2 u + A_{10} y_3 u, \qquad (12)$$

where A1, A2, A3, A4, A5, A6, A7, A8, A9, A10 are constant coefficients to be determined.

Let us take the partial derivatives of expression (12) with respect to the functions y1, y2, y3 and u (since S in (12) does not depend on y0, we have ∂S/∂y0 = 0):

$$\frac{\partial S}{\partial y_1} = 2A_1 y_1 + A_5 y_2 + A_6 y_3 + A_7 u, \qquad (13)$$

$$\frac{\partial S}{\partial y_2} = A_5 y_1 + 2A_2 y_2 + A_8 y_3 + A_9 u, \qquad (14)$$

$$\frac{\partial S}{\partial y_3} = A_6 y_1 + A_8 y_2 + 2A_3 y_3 + A_{10} u, \qquad (15)$$

$$\frac{\partial S}{\partial u} = A_7 y_1 + A_9 y_2 + A_{10} y_3 + 2A_4 u. \qquad (16)$$

Let’s substitute expressions (13)–(16) in equation (11) and then remove of common factors of the brack-ets. We get:

$$\begin{aligned}
&\left(\delta_1 - \frac{A_7^2}{4\delta_5}\right) y_1^2
+ \left(\delta_2 + A_5 - \Omega^2 A_8 - \frac{A_9^2}{4\delta_5}\right) y_2^2
+ \left(\delta_3 + A_8 - \frac{A_{10}^2}{4\delta_5}\right) y_3^2
+ \left(\delta_4 + A_{10} - \frac{A_4^2}{\delta_5}\right) u^2 \\
&\quad + \left(2A_1 - \Omega^2 A_6 - \frac{A_7 A_9}{2\delta_5}\right) y_1 y_2
+ \left(A_5 - \frac{A_7 A_{10}}{2\delta_5}\right) y_1 y_3
+ \left(A_6 - \frac{A_4 A_7}{\delta_5}\right) y_1 u \\
&\quad + \left(2A_2 + A_6 - 2\Omega^2 A_3 - \frac{A_9 A_{10}}{2\delta_5}\right) y_2 y_3
+ \left(A_7 + A_8 - \Omega^2 A_{10} - \frac{A_4 A_9}{\delta_5}\right) y_2 u
+ \left(2A_3 + A_9 - \frac{A_4 A_{10}}{\delta_5}\right) y_3 u = 0. \qquad (18)
\end{aligned}$$

Equation (18) is true when each expression in parentheses equals zero, because the functions y1, y2, y3, u cannot all be zero at the same time. Therefore, equation (18) can be replaced by a system of nonlinear algebraic equations:

$$\begin{cases}
\delta_1 - \dfrac{A_7^2}{4\delta_5} = 0; \\[4pt]
\delta_2 + A_5 - \Omega^2 A_8 - \dfrac{A_9^2}{4\delta_5} = 0; \\[4pt]
\delta_3 + A_8 - \dfrac{A_{10}^2}{4\delta_5} = 0; \\[4pt]
\delta_4 + A_{10} - \dfrac{A_4^2}{\delta_5} = 0; \\[4pt]
2A_1 - \Omega^2 A_6 - \dfrac{A_7 A_9}{2\delta_5} = 0; \\[4pt]
A_5 - \dfrac{A_7 A_{10}}{2\delta_5} = 0; \\[4pt]
A_6 - \dfrac{A_4 A_7}{\delta_5} = 0; \\[4pt]
2A_2 + A_6 - 2\Omega^2 A_3 - \dfrac{A_9 A_{10}}{2\delta_5} = 0; \\[4pt]
A_7 + A_8 - \Omega^2 A_{10} - \dfrac{A_4 A_9}{\delta_5} = 0; \\[4pt]
2A_3 + A_9 - \dfrac{A_4 A_{10}}{\delta_5} = 0.
\end{cases} \qquad (19)$$

The system of equations (19) can be solved analytically, but this is too cumbersome, so let us simplify it. Taking into account formula (16), expression (10) may be written as follows:

$$\varphi = -\frac{2A_4 u + A_7 y_1 + A_9 y_2 + A_{10} y_3}{2\delta_5}. \qquad (20)$$

Thus, in order to find the unknown function φ, which is the first derivative of the control function of the dynamic system, one must find only the four coefficients A4, A7, A9, A10, and four equations are needed to determine them. The first and fourth equations of (19) contain only these coefficients, so we will use them. Two further equations in which only A4, A7, A9, A10 are unknown can be obtained from (19): the third equation results from rewriting the second equation of (19) taking into account its third and sixth equations, and the fourth results from rewriting the ninth equation of (19) taking into account its third equation. As a result, we have:

$$\begin{cases}
\delta_1 - \dfrac{A_7^2}{4\delta_5} = 0; \\[4pt]
\delta_4 + A_{10} - \dfrac{A_4^2}{\delta_5} = 0; \\[4pt]
\delta_2 + \Omega^2 \delta_3 + \dfrac{A_7 A_{10}}{2\delta_5} - \dfrac{A_9^2}{4\delta_5} - \dfrac{\Omega^2 A_{10}^2}{4\delta_5} = 0; \\[4pt]
A_7 + \dfrac{A_{10}^2}{4\delta_5} - \delta_3 - \Omega^2 A_{10} - \dfrac{A_4 A_9}{\delta_5} = 0.
\end{cases} \qquad (21)$$

The first equation of (21) is independent of the others, and we can immediately write:

$$A_7 = 2\sqrt{\delta_1 \delta_5}. \qquad (22)$$

The negative root is rejected because it can lead to an unstable dynamical system. We can express the unknown coefficients A10 and A9 through the coefficient A4:

$$A_{10} = \frac{A_4^2}{\delta_5} - \delta_4, \qquad (23)$$

(24)

= ± − −

− − Ω + + Ω

24

9 4 1 5522

2 244 5 2 3

5

4

4 ( ) .

АА

А

δ δ δδ

δ δ δ δδ

When expressions (22)–(24) are taken into account, the system of equations (21) leads to one algebraic equation of the eighth degree with respect to A4:

$$A_4^8 + B_1 A_4^6 + B_2 A_4^4 + B_3 A_4^2 + B_4 = 0. \qquad (25)$$

The latter reduces to an equation of the fourth degree under the substitution $\tilde{A}_4 = A_4^2$:

$$\tilde{A}_4^4 + B_1 \tilde{A}_4^3 + B_2 \tilde{A}_4^2 + B_3 \tilde{A}_4 + B_4 = 0. \qquad (26)$$

Equation (26) may be solved by the Descartes–Euler method. We do not present the solutions here because of their considerable volume; we note only that equation (26) has two real and two complex roots. The eight roots of equation (25) can be found by the reverse substitution $A_4 = \pm\sqrt{\tilde{A}_4}$. Thereafter, we choose only one of them: the real positive root. Furthermore, we choose the sign "+" before the root in expression (24) for the unambiguous determination of the coefficient A9. Thus, all complex and negative values of the coefficients A4, A9 that satisfy the system of equations (21) are rejected, because they can lead to instability of the dynamical system "crane-load".
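The root selection described above can be illustrated numerically: locate the positive real root of the quartic (26) in Ã4 (here by bisection) and apply the reverse substitution A4 = +√Ã4. The coefficients B1...B4 below are made-up sample values chosen so that the quartic has two real roots (only one positive) and a complex pair; they are not derived from any real crane.

```python
def positive_real_root(b, lo=0.0, hi=1e6, tol=1e-10):
    """Bisection for the positive real root of x^4 + b1*x^3 + b2*x^2 + b3*x + b4.

    Assumes exactly one sign change on the bracket, which holds for the
    sample coefficients used below."""
    def p(x):
        b1, b2, b3, b4 = b
        return x**4 + b1*x**3 + b2*x**2 + b3*x + b4

    while p(lo) * p(hi) > 0:   # grow the bracket until the signs differ
        hi *= 2
    while hi - lo > tol:
        mid = 0.5 * (lo + hi)
        if p(lo) * p(mid) <= 0:
            hi = mid
        else:
            lo = mid
    return 0.5 * (lo + hi)

# sample quartic (x - 4)(x + 9)(x^2 + x + 1): roots 4, -9 and a complex pair
B = (6.0, -30.0, -31.0, -36.0)
A4_tilde = positive_real_root(B)   # the admissible root of (26)
A4 = A4_tilde ** 0.5               # reverse substitution, A4 = +sqrt(A4_tilde)
```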

Expression (20) may be used to find the function φ, which is the first derivative of the control function u over time. However, we need the control function itself, in the form u = u(y0, y1, y2, y3).

For this purpose one must integrate expression (20):

$$u = -\frac{A_4}{\delta_5}\, y_3 - \frac{A_{10}}{2\delta_5}\, y_2 - \frac{A_9}{2\delta_5}\, y_1 - \frac{A_7}{2\delta_5}\, y_0 + C, \qquad (27)$$

where C is the constant of integration. In order to find it, one must solve the equation u(0) = u0, which in expanded form reads:

$$-\frac{A_4}{\delta_5}\, y_3(0) - \frac{A_{10}}{2\delta_5}\, y_2(0) - \frac{A_9}{2\delta_5}\, y_1(0) - \frac{A_7}{2\delta_5}\, y_0(0) + C = u_0. \qquad (28)$$

Solving equation (28) for C and substituting the result into expression (27), we finally obtain the control function u:

$$u = u_0 + \frac{A_4}{\delta_5}\left(y_3(0) - y_3\right) + \frac{A_{10}}{2\delta_5}\left(y_2(0) - y_2\right) + \frac{A_9}{2\delta_5}\left(y_1(0) - y_1\right) + \frac{A_7}{2\delta_5}\left(y_0(0) - y_0\right). \qquad (29)$$

Thus we have obtained a control function that depends on the initial control and on the phase coordinates. The initial control u0 can be set arbitrarily; in the particular case u0 = 0. In practice this means that there is no dynamic effort in the crane's drive at the beginning of braking: the risk of significant currents in the electric drive and of dynamic loads on the mechanical part of the crane's drive and its metal structure is eliminated as well.
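Once the coefficients A4, A7, A9, A10 and δ5 are known, the feedback law (29) is just a weighted sum of deviations from the initial state. The sketch below is illustrative Python with made-up coefficient values; it also checks that at the initial moment, when y = y(0), the control equals u0.

```python
def control(y, y_start, coeffs, delta5, u0=0.0):
    """Feedback law (29); y and y_start are tuples (y0, y1, y2, y3).

    coeffs = (A4, A7, A9, A10); the coefficient values used below are
    made-up sample numbers, not ones computed from equations (22)-(26)."""
    a4, a7, a9, a10 = coeffs
    gains = (a7 / (2 * delta5),   # multiplies y0(0) - y0
             a9 / (2 * delta5),   # multiplies y1(0) - y1
             a10 / (2 * delta5),  # multiplies y2(0) - y2
             a4 / delta5)         # multiplies y3(0) - y3
    return u0 + sum(g * (s0 - s) for g, s0, s in zip(gains, y_start, y))

y_init = (1.0, 0.5, -0.2, 0.0)
u_start = control(y_init, y_init, coeffs=(2.0, 1.0, 0.8, 0.5), delta5=1.0)
# at t = 0 the deviations vanish, so the control equals u0 = 0
```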

Let us plot the resulting control law (Fig. 2) together with the three-dimensional phase portrait of the dynamic system (Fig. 3). The gray point in Fig. 3 marks the origin of the coordinate system.

Fig. 2. Graph of the function of optimal control of dynamic system ”crane-load”


Fig. 3. Three-dimensional phase portrait of motion of dynamic system ”crane-load”

At the origin of the coordinate system the dynamic system has zero energy of motion, i.e., the crane has stopped and the load does not oscillate on the rope. Thus the problem of optimal control can be considered solved. However, we did not take into account the constraints (3) when solving this problem, and such constraints are usually imposed on the control. Physically this means that the electric drive will occasionally be overloaded and will not be able to realize the optimal control. It is therefore necessary to take the constraints (3) into account.

2.3. Analysis of the Results (Synthesis of Quasi-optimal Control)

An easy way to take into account constraints such as (3) is to pass the optimal control signal through a nonlinear element of the "saturation" type. Such control will be called quasi-optimal, because it consists of pieces of the optimal control and of pieces of the maximum and minimum values of the control. Analytically this is expressed in the following form:

$$u^* = \begin{cases}
u, & \text{if } u_{\min} \le u \le u_{\max}; \\
u_{\min}, & \text{if } u < u_{\min}; \\
u_{\max}, & \text{if } u > u_{\max},
\end{cases} \qquad (30)$$

where u* is the quasi-optimal control that satisfies constraints (3), and umin, umax are respectively the minimum and maximum control. Graphs similar to those above are shown in Fig. 4 and Fig. 5 for the case umin = −0,4 m/s² and umax = 0,4 m/s². One may see from these graphs that the control does not reach the upper limit. Let us narrow the limits of permissible control values to umin = −0,2 m/s² and umax = 0,2 m/s². Physically this means that the drive motor power is reduced by half, so it is possible to design a crane motor of lower power. However, the duration of the transient process increases, as seen from Fig. 6 and Fig. 7. Thus, one can reduce the crane's drive power at the cost of a longer transient process.
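The "saturation" element (30) is a simple clamp; a minimal illustrative sketch:

```python
def saturate(u, u_min, u_max):
    """Quasi-optimal control u* of (30): clamp the optimal u to [u_min, u_max]."""
    return min(max(u, u_min), u_max)

# e.g. with u_min = -0.4 m/s^2 and u_max = 0.4 m/s^2:
u_star = saturate(0.55, -0.4, 0.4)   # the control hits the upper limit, 0.4
```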

Fig. 4. Graph of the quasi-optimal control function of the dynamic system "crane-load" under constraints (3) (umin = −0,4 m/s², umax = 0,4 m/s²)

Fig. 5. Three-dimensional phase portrait of the motion of the dynamic system "crane-load" under control (30) (umin = −0,4 m/s², umax = 0,4 m/s²)

Fig. 6. Graph of the quasi-optimal control function of the dynamic system "crane-load" under constraints (3) (umin = −0,2 m/s², umax = 0,2 m/s²)


Fig. 7. Three-dimensional phase portrait of the motion of the dynamic system "crane-load" under control (30) (umin = −0,2 m/s², umax = 0,2 m/s²)

Fig. 7 shows that the drawback of narrowing the range of allowable control values is a change of the sign of the crane's speed. One can also point out another disadvantage of the quasi-optimal control: it takes very small values when the phase coordinates of the dynamical system "crane-load" are small, which means that at the end of the transient process the control is "weak". A possible way to solve this problem is to vary the coefficients kj, which enter the structure of the optimization criterion of the transient process.

3. Conclusion

The method of dynamic programming allows one to synthesize the optimal control in the form of feedback without restrictions on the magnitude of the control. The use of a nonlinear element of the "saturation" type provides a quasi-optimal control that satisfies the limits imposed on the control at every moment of time. This quasi-optimal feedback control consists of pieces of the optimal control and of the boundaries of the admissible region. Varying the coefficients in the structure of the optimization criterion is a possible way to solve the problem of synthesizing an optimal control that always remains within the admissible limits, even when these limits are functions of the time and of the phase coordinates of the dynamical system "crane-load".

AUTHORS

Vjacheslav Lovejkin, Yuriy Romasevich*, Yuriy Chovnuk – Chair of Machine Construction, National University of Life and Environmental Sciences of Ukraine, Heroyiv Oborony st., 15, Kyiv, 03041, Ukraine. E-mail: [email protected].

* Corresponding author

References

[1] Sakawa Y., Shindo Y., "Optimal control of container cranes", Automatica, vol. 18, no. 3, 1982, pp. 257–266.

[2] Hanafy M. Omar, Control of Gantry and Tower Cranes. – Ph.D. Dissertation, Blacksburg, Virginia, Virginia Polytechnic Institute, 2003.

[3] Mahdieh A., Zarabadipour H., Mahdi A. S., “Anti-swing control of a double-pendulum-type overhead crane using parallel distributed fuzzy LQR controller“, International Journal of the Physical Sciences, vol. 6(35), 2011, pp. 7850–7856.

[4] Chengyuan C., Shihwei H., Kuohung C., "A practical fuzzy controllers scheme of overhead crane", Journal of Control Theory and Applications, vol. 3, 2005, pp. 266–270.

[5] Mohammad R., Akbarzadeh T., Amir H. “Fuzzy Modeling of Human Control Strategy for Head Crane”. In: IEEE International Fuzzy Systems Conference, 2001, pp. 1076–1079.

[6] Kogure H., Tojo M., "Recent developments in crane control", Hitachi Rev., vol. 6, 1978, pp. 315–320.

[7] SmartCrane™ Anti-Sway Crane Control Products, Product Descriptions, 2010 SmartCrane, LLC.

[8] Siemens SIMOCRANE Crane Management System, System Manual, Valid for version 4.1, 2009.

[9] Solihin M. I., Wahyudi “Sensorless Anti-swing Control for Automatic Gantry Crane System: Model-based Approach“, International Journal of Applied Engineering Research, vol. 2, no.1, 2007, pp. 147–161.

[10] Ahmad M.A., Raja Ismail R.M.T., Ramli M.S., Abdul Ghani N.M. , Zawawi M.A., “Optimal Tracking with Sway Suppression Control for a Gantry Crane System“, European Journal of Scientific Research, vol. 3, no. 4, 2009, pp. 630–641.

[11] Keum-Shik H., Quang Hieu N., “Port Automation: Modeling and Control of Container Cranes“. In: International Conference on Instrumentation, Control & Automation, Bandung, Indonesia, October, 2009, pp. 1076–1079.

[12] Bellman R., Dreyfus E., Applied Dynamic Programming, Princeton University Press, Princeton, USA, 1962.

[13] Kwakernaak H., Sivan R., Linear Optimal Control Systems, John Wiley & Sons Inc., New York, 1972.


Algorithms for Packing Soft Blocks of VLSI Systems

Marcin Iwaniec, Adam Janiak

Submitted: 1st August 2012; accepted: 30th January 2013


Abstract: This paper contains a review of the literature concerning the packing of hard-blocks (of fixed dimensions) and soft-blocks (of fixed area, with dimensions changeable within specified constraints). These considerations are applicable to the design of large scale integration chips. In order to solve the problem of packing soft-blocks, three algorithms are introduced and compared: simulated annealing, a heuristic constructional algorithm based on five operations that improve packing quality, and an algorithm which combines the two previous ones. Experiments were conducted to compare these algorithms with the best ones from the literature.

Keywords: soft-blocks, hard-blocks, VLSI, packing problem

1. Introduction

The continuous development of technology facilitates the lives of the users of modern devices on the one hand, but also generates new problems to be faced by designers and engineers. One such problem is the physical synthesis of very large scale integration (VLSI) layouts. When one deals with billions of elements to be packed on a chip board (e.g., the recent AMD Radeon HD 7970 graphics card has a GPU with 4.31 billion transistors), the synthesis, i.e. the implementation of the physical packing of chip elements and interconnections according to a logical scheme, becomes a critical point at the design stage because it directly influences the chip price. For example, a 1% increase in the size of an Intel Pentium 4 chipset would result in an annual increase of production costs of $63.5 million (1.5%), whereas a size increase of 15% would induce an additional annual cost of $961 million (22%), see [7]. This implies the need for algorithms which minimize the area needed to pack elements (or blocks, in general) on the chip surface under preset constraints, in order to obtain a correct packing of elements and interconnections.

Two types of blocks are distinguished: a rectangle of fixed dimensions, referred to as a hard-block, and a rectangle of fixed area, referred to as a soft-block (additional constraints may also exist). Recent block-packing methods are largely based on the so-called sequence-pair, introduced in [14], and on a constraint graph. However, existing sequence-pair and constraint-graph approaches for hard-blocks cannot handle soft-blocks, especially if no starting dimensions are specified at the beginning. Nevertheless, algorithms minimizing the floorplan area for soft-blocks need to be considered, because soft-blocks are frequently used at the early stages of chip design, when not all the details have been provided.
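To make the sequence-pair representation concrete: block b lies to the left of block a when b precedes a in both sequences, and below a when b follows a in the first sequence but precedes it in the second; chip width and height then follow from longest-path computations over these relations. The sketch below is an illustrative O(n²) Python evaluator for hard-blocks only (soft-blocks would additionally vary each block's width and height under an area constraint).

```python
def evaluate_sequence_pair(gp, gn, w, h):
    """Return (width, height) of the packing encoded by the sequence pair (gp, gn).

    gp, gn -- the two block permutations; w, h -- dicts of block widths/heights.
    """
    pp = {b: i for i, b in enumerate(gp)}
    pn = {b: i for i, b in enumerate(gn)}
    x, y = {}, {}
    for a in gn:  # gn order is a valid processing order for both relations
        x[a] = max((x[b] + w[b] for b in gn
                    if pp[b] < pp[a] and pn[b] < pn[a]), default=0)
        y[a] = max((y[b] + h[b] for b in gn
                    if pp[b] > pp[a] and pn[b] < pn[a]), default=0)
    width = max(x[b] + w[b] for b in gn)
    height = max(y[b] + h[b] for b in gn)
    return width, height

# two blocks: a is 2x3, b is 4x1
w, h = {'a': 2, 'b': 4}, {'a': 3, 'b': 1}
side_by_side = evaluate_sequence_pair(['a', 'b'], ['a', 'b'], w, h)  # -> (6, 3)
stacked = evaluate_sequence_pair(['b', 'a'], ['a', 'b'], w, h)       # -> (4, 4)
```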

Many algorithms for the soft-block packing problem may be found in the literature. Most of them were initially created to deal with the hard-block packing problem and were later adapted to soft-blocks. Kirkpatrick [11] described a simulated annealing (SA) algorithm which, since it was first used for this purpose [15], has become one of the most frequently applied algorithms for packing hard-blocks ([19], [3], [21], [4], [9], [5]). Other metaheuristic algorithms also exist, e.g. a genetic algorithm [16], evolutionary algorithms ([18], [12]) and an algorithm based on the principles of ant colony optimization [6]. There are also some implementations of SA for soft-blocks. The simplest one randomly modifies block dimensions within the possible range during the neighbourhood modification [13]. Kang and Dai [10] presented another heuristic method, calculating the optimal width-to-height ratio regardless of the packing generated by the simulated annealing algorithm. A more sophisticated method of block adjustment during neighbourhood modifications was designed in [8]. Block dimension adjustment based on Lagrangian relaxation was introduced in [23]. The branch and bound algorithm in [1] gives the best results; in some cases the solution derived from this algorithm is optimal, but the time needed to find a solution is unacceptable.
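Schematically, the SA variants cited above share one loop: perturb the current solution, always accept improvements, accept worse solutions with probability exp(−ΔC/T), and cool the temperature. The following generic Python skeleton is illustrative only; the cost and neighbour functions are placeholders, not the block-packing moves of any cited paper.

```python
import math, random

def simulated_annealing(state, cost, neighbour,
                        t0=1.0, t_min=1e-3, alpha=0.95, moves=50):
    """Generic SA loop: returns the best state found and its cost."""
    best, best_c = state, cost(state)
    cur, cur_c = best, best_c
    t = t0
    while t > t_min:
        for _ in range(moves):
            cand = neighbour(cur)
            cand_c = cost(cand)
            # accept improvements, or worse moves with Boltzmann probability
            if cand_c < cur_c or random.random() < math.exp((cur_c - cand_c) / t):
                cur, cur_c = cand, cand_c
                if cur_c < best_c:
                    best, best_c = cur, cur_c
        t *= alpha  # geometric cooling schedule
    return best, best_c
```

For block packing, `state` would be a sequence pair, `neighbour` a swap or rotation move, and `cost` the floorplan area obtained from the constraint-graph evaluation.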

In this paper we propose a quick heuristic (constructional) algorithm for floorplan minimization for soft-blocks. This algorithm requires an initial packing, which is then modified to eliminate empty spaces between blocks. The idea is based on the defining property of soft-blocks: the ability to change height and width within specified constraints. Several operations that improve the quality of the solution were introduced in this algorithm. The method was implemented in such a way that it can start from an initial packing provided by any algorithm. In this paper, a simple implementation of the simulated annealing algorithm based on the sequence-pair packing representation was used to generate the initial packing.

This paper is organized as follows. The problem is formulated in section 2. Then, section 3 provides a description of sequence-pair and constraint graph used to represent packing. Simulated annealing algorithm was described in section 4 and the essential

Algorithms for Packing Soft Blocks of VLSI Systems

Marcin Iwaniec, Adam Janiak

Submitted 1st August 2012; accepted 30th January 2013

Abstract: This paper contains a review of the literature concerning the packing of hard-blocks (of fixed dimensions) and soft-blocks (of fixed area, with dimensions changeable within specified constraints). These considerations are applicable to the design of very large scale integration chips. To solve the problem of packing soft-blocks, three algorithms are introduced and compared: simulated annealing, a heuristic constructional algorithm based on five operations improving packing quality, and an algorithm which combines the two previous ones. Experiments were conducted to compare these algorithms with the best ones from the literature.

Keywords: soft-blocks, hard-blocks, VLSI, packing problem

1. Introduction

The continuous development of technology facilitates the lives of the users of modern devices on the one hand, but on the other it generates new problems to be faced by designers and engineers. One such problem is the physical synthesis of very large scale integration (VLSI) layouts. When one deals with billions of elements to be packed on a chip (e.g. the AMD Radeon HD 7970 graphics card has a GPU with 4.31 billion transistors), the synthesis, i.e. the implementation of the physical packing of chip elements and interconnections according to a logical scheme, becomes a critical point of the design stage, because it directly influences the chip price. For example, a 1% increase in the size of an Intel Pentium 4 chipset would result in an annual increase of production costs of $63.5 million (1.5%), whereas a size increase of 15% would induce an additional annual cost of $961 million (22%), see [7]. This implies the need for algorithms which minimize the area needed to pack elements (or blocks, in general) on the chip surface under preset constraints, in order to obtain a correct packing of elements and interconnections.

Two types of blocks are distinguished: a rectangle of fixed dimensions, referred to as a hard-block, and a rectangle of fixed area, referred to as a soft-block (additional constraints may also exist). Recent methods of block packing are largely based on the so-called sequence-pair, introduced in [14], and on constraint graphs. However, the existing approaches for hard-blocks based on the sequence-pair and constraint graphs cannot be applied directly to soft-blocks, especially if no starting dimensions are specified at the beginning. Nevertheless, it is necessary to consider algorithms solving the problem of minimizing the area of floorplans for soft-blocks, because such blocks are frequently used at the early stages of chip design, when not all the details have been provided.

Many algorithms for the soft-blocks packing problem may be found in the literature. Most of them were initially created to deal with the hard-blocks packing problem and were later adapted to soft-blocks. Kirkpatrick in [11] described the simulated annealing (SA) algorithm which, since the first time it was used for this purpose [15], has become one of the most frequently applied algorithms for packing hard-blocks ([19], [3], [21], [4], [9], [5]). Other metaheuristic algorithms also exist, e.g. a genetic algorithm [16], evolutionary algorithms ([18], [12]) or an algorithm based on the principles of ant colony optimization [6]. There are also implementations of SA for soft-blocks. The simplest one randomly modifies block dimensions within the allowed range during the neighbourhood modification [13]. Kang and Dai in [10] presented a heuristic method of calculating the optimum width-to-height ratio regardless of the packing generated by the simulated annealing algorithm. A more sophisticated method of block adjustment during neighbourhood modifications was designed in [8]. Block dimension adjustment based on Lagrangian relaxation was introduced in [23]. The branch and bound algorithm in [1] gives the best results – in some cases the solution derived from this algorithm is optimal, but the time needed to find it is unacceptable.

In this paper we propose a quick heuristic (constructional) algorithm for floorplan minimization for soft-blocks. This algorithm requires an initial packing, which is then modified to eliminate empty spaces between blocks. The idea is based on the defining property of soft-blocks: the ability to change height and width within specified constraints. Several operations were introduced in this algorithm to improve the quality of the solution. The method was implemented so that it can start from an initial packing provided by any algorithm. In this paper, a simple implementation of the simulated annealing algorithm based on the sequence-pair packing representation was used to generate the initial packing.

Journal of Automation, Mobile Robotics & Intelligent Systems, VOLUME 7, N° 3, 2013

This paper is organized as follows. The problem is formulated in Section 2. Section 3 describes the sequence-pair and constraint graphs used to represent a packing. The simulated annealing algorithm is described in Section 4 and the essential constructional algorithm in Section 5. Section 6 demonstrates the results of the experiments, followed by a summary and conclusions in Section 7.

2. Problem Formulation

A set B = {B1, B2, …, Bi, …, Bn} of soft-blocks is given, to be packed into a rectangular area. Each block Bi is described by its area Ai as follows:

Ai := hi · wi,   (1)

where hi is the block height and wi is the block width. Furthermore, the ratio ki = wi / hi is in the range:

kmin ≤ ki ≤ kmax,   i = 1, 2, …, n,   (2)

where kmin and kmax are explicitly given parameters.

A packing which satisfies requirements (1) and (2), and in which no two blocks overlap, is called feasible.
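A feasibility check under these definitions can be sketched as follows (a minimal sketch; the tuple representation of a placed block as (x, y, w, h) and the function name are assumptions of this illustration, not taken from the paper):

```python
def is_feasible(blocks, k_min, k_max):
    """Check condition (2) and the non-overlap requirement.

    Each placed block is a tuple (x, y, w, h): lower left corner plus
    width and height; the ratio k = w / h must lie in [k_min, k_max].
    """
    for (x, y, w, h) in blocks:
        if not (k_min <= w / h <= k_max):
            return False
    # Two axis-aligned rectangles overlap iff their projections intersect
    # on both the horizontal and the vertical axis.
    for i, (xi, yi, wi, hi) in enumerate(blocks):
        for (xj, yj, wj, hj) in blocks[i + 1:]:
            if xi < xj + wj and xj < xi + wi and yi < yj + hj and yj < yi + hi:
                return False
    return True
```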

The problem is to find a feasible packing whose dead-space (DS) is as close to 0% as possible. The minimized criterion DS is given as follows [17]:

DS = (U − ∑Ai) / (∑Ai) × 100 [%],   (3)

where U is the total area of the floorplan and ∑Ai is the sum of the areas of the packed blocks (Ai is the area of block i). To solve this problem, the simulated annealing

algorithm is used (Section 4), modified for the considered problem, together with a constructional algorithm using the following operations: sliding, scaling, scaling with displacement, elimination of dead spaces and layout rotation (Section 5). The third algorithm proposed in this paper combines both previous algorithms.

3. Floorplan Representation – Sequence-pair and Constraint Graphs

Both the sequence-pair and constraint graphs are used to describe the topological relations of blocks within a packing. A sequence-pair is an ordered pair of n-element block index sequences; both sequences contain exactly the same elements (indices of all blocks from the set B) arranged in a specific order. The positions of two particular elements in both sequences determine their topological relation. For given block indices i, j ∈ {1, 2, 3, …, n}, i ≠ j, the relation of blocks i and j is horizontal if i comes before j in both sequences (4); if i comes before j only in the second sequence, the relation is vertical (5).

(< …, i, …, j, … >, < …, i, …, j, … >)  →  i is to the left of j,   (4)
(< …, j, …, i, … >, < …, i, …, j, … >)  →  i is below j.   (5)

The topological relations of blocks can also be computed using constraint graphs, provided that the height and width of every block are known. The horizontal (vertical) graph, denoted Gh (Gv), has n vertices and a set of weighted edges. Vertices represent blocks, whereas edges represent topological relations. An edge from vertex i to vertex j with weight wi (the width of block i) is added to the horizontal graph when condition (4) is fulfilled. If condition (5) is fulfilled, an edge from vertex i to vertex j with weight hi (the height of block i) is added to the vertical graph. Thus, for every pair of blocks in the sequences, an edge is added to either Gh or Gv. The created constraint graphs are used to calculate the width (from Gh) and the height (from Gv) of the whole floorplan they represent. For this calculation one can use, for instance, an algorithm for finding the longest path in a graph [20].
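The longest-path evaluation of a sequence-pair can be sketched as follows (an illustrative sketch only; the function name, the 0-based list representation and the simple O(n²) loops are assumptions of this sketch, not the authors' implementation):

```python
def evaluate(seq1, seq2, w, h):
    """Place blocks from a sequence-pair and return (width, height, DS %).

    seq1/seq2: permutations of block indices 0..n-1; w[i], h[i]: dimensions.
    Coordinates follow the longest-path rule on the implicit constraint graphs.
    """
    n = len(seq1)
    p1 = {b: i for i, b in enumerate(seq1)}
    p2 = {b: i for i, b in enumerate(seq2)}
    x = [0.0] * n
    y = [0.0] * n
    # Process blocks in seq2 order: both the "left of" and the "below"
    # relation require a smaller seq2 index, so every predecessor of a
    # block has already been placed when the block is reached.
    for b in seq2:
        for a in seq2[:p2[b]]:
            if p1[a] < p1[b]:                 # a is to the left of b, see (4)
                x[b] = max(x[b], x[a] + w[a])
            else:                             # a is below b, see (5)
                y[b] = max(y[b], y[a] + h[a])
    width = max(x[i] + w[i] for i in range(n))
    height = max(y[i] + h[i] for i in range(n))
    area = sum(w[i] * h[i] for i in range(n))
    dead_space = (width * height - area) / area * 100.0   # criterion (3)
    return width, height, dead_space
```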

In the Cartesian coordinate system, the horizontal relation of i and j means that block j is situated to the right of block i, so xj ≥ xi + wi, where xi is the horizontal coordinate of the lower left vertex of block i and wi is its width. In the same way, for the vertical relation, block j is situated above block i, so yj ≥ yi + hi, where yi is the vertical coordinate of the lower left vertex of block i and hi is its height. As a consequence, the horizontal relation determines the packing of blocks from left to right, and the vertical one from the bottom to the top.

4. Simulated Annealing Algorithm

The method described in [11] (or rather methodology, to be more specific), called simulated annealing, simulates the process well known in metallurgy as annealing (heating up an element and then cooling it down very slowly). The parameters required to apply this method are the initial temperature (Tp), the terminating temperature (ϵ), the temperature change (α), a neighbourhood function (to generate a new state) and a goal function (to calculate the state energy).

A great advantage of simulated annealing is that, in the course of calculations, a worse solution can be accepted, which can finally lead to a better one. The method is partially based on a modified version of the Monte Carlo method called the random walk method, which assumes acceptance of a result that is worse than the best one found so far. The probability of accepting a worse solution is expressed by the formula exp(Δenergy / t), where Δenergy is the difference between the previous and the current state energy (negative for a worse state), and t is the current temperature. This temperature dependence induces a higher probability that worse results are accepted at the first stage of the method, when the temperature is high. The lower the temperature, the more often only better results are accepted. As a consequence, during the final stage, when the temperature is the lowest, the current solution can be improved as much as possible without being replaced.

In this paper an algorithm based on this method was constructed. During the annealing process, a sequence-pair is modified very simply, by a random exchange of two elements, and the newly obtained solution is evaluated by calculating the packing area (this is the cost function). The initial sequence-pair is generated randomly and used both in SA and in the proposed algorithm described in the next section.
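The annealing loop described above can be sketched as follows (a sketch under assumptions: `cost` stands for the packing-area evaluation of a sequence-pair, the neighbourhood move is the random exchange of two elements, and the default parameters are those reported in Section 6):

```python
import math
import random

def simulated_annealing(seq1, seq2, cost, t_start=400.0, t_end=0.001, alpha=0.999):
    """Anneal a sequence-pair; returns the best pair seen and its cost."""
    cur = (list(seq1), list(seq2))
    cur_cost = cost(*cur)
    best, best_cost = cur, cur_cost
    t = t_start
    while t > t_end:
        s1, s2 = list(cur[0]), list(cur[1])
        s = random.choice((s1, s2))              # pick one sequence to mutate
        i, j = random.sample(range(len(s)), 2)
        s[i], s[j] = s[j], s[i]                  # random exchange of two elements
        new_cost = cost(s1, s2)
        # Metropolis rule: always accept improvements, accept a worse state
        # with probability exp(delta / t), delta = previous - current cost.
        if new_cost <= cur_cost or random.random() < math.exp((cur_cost - new_cost) / t):
            cur, cur_cost = (s1, s2), new_cost
            if cur_cost < best_cost:
                best, best_cost = cur, cur_cost
        t *= alpha                               # geometric cooling schedule
    return best, best_cost
```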


5. Constructional Algorithm

In this section, a new heuristic constructional algorithm is presented, which uses the operations described below to improve a solution. The algorithm starts from the same initial solution as simulated annealing and executes the following steps to improve it:

Step 1. Save the initial solution IS as the current solution CS (CS = IS) and as the best solution BS (BS = IS). Set the rotation to 0° (R = 0°).

Step 2. Modify CS by executing operations in the following order: blocks sliding, blocks scaling, scaling with displacement, elimination of deadlocked empty spaces; repeat until the area cannot be reduced any further.

Step 3. If CS is better than BS (the dead space (DS) of CS is smaller than in BS), save CS as BS (BS = CS).

Step 4. If the rotation is 270° (R = 270°), go to Step 5. Otherwise, add 90° to the current rotation value (R = R + 90°), save IS as CS (CS = IS), rotate CS clockwise by R, then go to Step 2.

Step 5. End of the algorithm; BS is the final solution given by the constructional algorithm.
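Steps 1–5 can be sketched as the following driver loop (the helpers `improve`, `rotate_cw` and `dead_space` are placeholders of this sketch standing for Step 2, the clockwise rotation and the DS criterion (3), respectively):

```python
def constructional(initial, improve, rotate_cw, dead_space):
    """Run Step 2 on the initial packing and on its 90°, 180° and 270°
    clockwise rotations; return the variant with the smallest dead space."""
    best = improve(initial)          # R = 0°
    current = initial
    for _ in range(3):               # R = 90°, 180°, 270°
        current = rotate_cw(current)  # rotate the initial layout further
        candidate = improve(current)
        if dead_space(candidate) < dead_space(best):
            best = candidate
    return best
```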

5.1 Blocks Sliding

To simplify the description, only the first quadrant of the Cartesian coordinate system is used, where both coordinates (x, y) are positive, so the lower left corner of the floorplan is at the point (0, 0). Blocks sliding is the operation of moving blocks as close to the origin of the coordinate system as possible, down and left (two separate operations), provided there is enough space to move them: blocks cannot overlap after being moved. Sliding proceeds recursively, beginning with the blocks closest to the origin (from the left or from the bottom), see Figure 1.
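A left-sliding pass can be sketched as follows (a minimal sketch; processing blocks in order of increasing x mirrors the recursion from the blocks closest to the origin, and the (x, y, w, h) representation is an assumption of this illustration):

```python
def slide_left(blocks):
    """Slide every block as far left as possible without overlaps.

    blocks: sequence of (x, y, w, h); blocks are processed from the
    leftmost on, so each one only has to clear blocks already slid.
    """
    placed = []
    for x, y, w, h in sorted(blocks, key=lambda b: b[0]):
        new_x = 0.0
        for px, py, pw, ph in placed:
            # Only blocks sharing a horizontal strip with (y, y + h) can
            # block the move; slide up to their right edge.
            if py < y + h and y < py + ph:
                new_x = max(new_x, px + pw)
        placed.append((new_x, y, w, h))
    return placed
```

The symmetric downward pass follows by exchanging the roles of x/w and y/h.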

a) initial packing   b) after sliding

Fig. 1. Example of blocks sliding to the left

5.2 Blocks Scaling

The blocks sliding operation eliminates empty spaces to the left of or below each of the blocks. Soft-blocks allow constrained modification of height and width. Using this property, for each block with an empty space on its right, the algorithm changes the width or height (within the preset constraints, see (2)) to fill the empty space if possible. This operation is called blocks scaling.

Figure 2 shows an example of blocks scaling. The heights of the first two blocks in the lower left corner (blocks 2 and 6) were increased and their widths decreased (the overall area did not change). As a result, the empty space above these blocks was filled, and the reduced widths of blocks 2 and 6 enabled shifting the other blocks from the right side to the left.
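The area-preserving dimension change behind blocks scaling can be sketched as follows (the helper name and signature are this sketch's own):

```python
import math

def rescale(area, target_width, k_min, k_max):
    """Give a block of fixed `area` a width as close to `target_width` as
    the aspect-ratio constraint k = w/h in [k_min, k_max] allows (see (2)).

    For width w the ratio is k = w / (area / w) = w**2 / area, so the
    admissible widths span [sqrt(k_min * area), sqrt(k_max * area)].
    """
    w_lo = math.sqrt(k_min * area)
    w_hi = math.sqrt(k_max * area)
    w = min(max(target_width, w_lo), w_hi)   # clamp into the admissible range
    return w, area / w                       # height follows from A = w * h
```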

a) initial packing b) packing after scaling

Fig. 2. Example of blocks scaling

a) initial packing   b) horizontal scaling
c) blocks displacement and scaling   d) final packing

Fig. 3. Example of scaling with displacement

5.3 Scaling with Displacement

The two operations described above (sliding and scaling) can be used independently, but in some cases they do not give the desired results. For example, the packing in Figure 3a is specific: the empty space is only around block 5. Therefore, sliding causes no changes, and scaling would only turn the square block 5 into a rectangle (Figure 3b). In this case the desired result is not achieved, as the area of the floorplan is not reduced.

Scaling with displacement is an 'intelligent' scaling performed as follows: a block with an empty space above it is moved up and scaled to increase its height at the expense of its width. The same action is performed for each block below the moved one (Figure 3c). Then the algorithm starts to slide blocks: all of them are moved down as far as possible. The whole cycle is repeated until there are no empty spaces above the blocks (Figure 3d) or a height or width constraint is reached.

The above description refers to the operation of scaling with displacement for vertical orientation.


The algorithm also works the same for horizontal orientation.

a) initial packing   b) packing after the elimination of the deadlocked empty spaces

Fig. 4. Example of eliminating deadlocked empty spaces between blocks 9, 8, 0, 1 and 7, 1, 2, 4

a) initial packing   b) packing without rotation

Fig. 5. Example of optimization without rotation

5.4 Elimination of Deadlocked Empty Spaces

In some cases there are deadlocked empty spaces between four adjacent blocks, e.g. empty space between blocks 9, 8, 1 and 0 or 7, 1, 2 and 4 in Figure 4. Such a situation cannot be solved using the previous operations, because none of the blocks can be displaced or scaled. Any action would cause an overlap, which is not acceptable.

Optimization starts by spreading the blocks apart. For the example in Figure 4, the following operations are performed: block 9 is expanded to the width of block 8 to fill the empty space. At the same time, all blocks to the right of block 9 are shifted to the right by the resulting difference, so that they do not overlap (the width of block 9 differs slightly from that of block 8 as a consequence of other operations performed later). Analogously, in the second case, block 1 is expanded to the width of block 7 and block 2 is moved to the right so that it does not overlap with the expanded block 1.
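The spreading step described above can be sketched as follows (a simplified sketch; the width change is applied directly here, while in the paper the block area is corrected by the operations performed later):

```python
def expand_and_shift(blocks, idx, new_width):
    """Widen block `idx` to `new_width` and shift every block to its right
    by the difference, so that nothing overlaps after the expansion."""
    x, y, w, h = blocks[idx]
    delta = new_width - w
    out = []
    for i, (bx, by, bw, bh) in enumerate(blocks):
        if i == idx:
            out.append((bx, by, new_width, bh))   # the expanded block
        elif bx >= x + w:                         # lies to the right of idx
            out.append((bx + delta, by, bw, bh))
        else:
            out.append((bx, by, bw, bh))
    return out
```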

5.5 Layout Rotation

The four previous operations do not solve the problem when the empty space is located in the middle of the layout or close to the origin of the coordinate system (see the example of such a packing in Figure 5). In this case, the empty space is moved as close to the upper right corner of the floorplan as possible by rotating the whole layout. Rotation is made clockwise, around the origin of the coordinate system, by 90° or a multiple of it.
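A 90° clockwise rotation of the whole layout can be sketched as follows (a sketch; the (x, y, w, h) block representation is an assumption of this illustration):

```python
def rotate_cw(blocks):
    """Rotate the layout 90° clockwise around the origin and shift it back
    into the first quadrant, keeping the floorplan corner at (0, 0).

    Under (x, y) -> (y, -x), a block spanning x..x+w, y..y+h becomes a
    block of dimensions (h, w) with lower left corner (y, -(x + w)); the
    layout width is then added to restore non-negative coordinates.
    """
    width = max(x + w for (x, y, w, h) in blocks)
    return [(y, width - x - w, h, w) for (x, y, w, h) in blocks]
```

Applying the function four times returns the original layout.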

Figure 5 shows a case where the packing optimization failed (the four previous operations were used without rotation). The empty space decreased, but did not disappear. For the same initial packing, an optimal solution was obtained after rotation by 180° and the use of the scaling with displacement operation (Figure 3d).

a) initial packing   b) rotation by 90°
c) rotation by 180°   d) rotation by 270°

Fig. 6. Example of packing rotation by 90°, 180° or 270°

When the empty space is on the right or at the top of the floorplan, the results can be worse after rotating the layout. Therefore, the constructional algorithm is executed starting from the initial packing and the three rotated packings, and the solution with the smallest value of DS is chosen as the final one.

6. Experimental Analysis

The experiments were based on the MCNC benchmarks available in the literature [22]. Four benchmarks were chosen (the value in brackets is the number of blocks): apte (9), xerox (10), hp (11), ami33 (33). For all of them, the constraints were set to kmin = 0.5 and kmax = 2.0. The experiments were run on a computer with the following parameters: Windows 7, Intel(R) Core(TM) i7 CPU Q 840 @ 1.87 GHz and 8 GB RAM.

Both the simulated annealing algorithm (SA, see Section 4) and the constructional algorithm (CA, see Section 5) require an initial sequence-pair. This pair was generated randomly and was the same for both algorithms. The experiments were also performed for the algorithm which combines SA and CA (marked SA+CA). In this algorithm, the solutions obtained from SA became the initial solutions for CA.

10 different initial packings were randomly generated. For each of them, every algorithm was executed separately: SA, CA and SA+CA. The constructional algorithm was run once per initial packing, because of its deterministic operation. The SA algorithm was executed 50 times for every initial packing, and the best of the 50 results was taken as the final solution. For each trial the following parameters of simulated annealing were used


(determined during experiments): Tp = 400, ϵ = 0.001, α = 0.999.

Experiment 1

The simulated annealing algorithm requires hard-blocks to operate correctly. Hard-blocks were obtained from soft-blocks by setting three constant values of k = 0.5, 1.0, 2.0; the first stage of the algorithm is therefore to create hard-blocks according to these three values. In the first experiment, results were obtained and compared for the three algorithms (SA, CA, SA+CA) applied to 10 randomly generated initial packings and the three values of the k ratio.
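Deriving hard-block dimensions from a soft-block area for a constant ratio k can be sketched as follows (the function name is this sketch's own):

```python
import math

def harden(areas, k):
    """Fix each soft-block's dimensions for a constant ratio k = w/h:
    from A = w * h and k = w / h we get w = sqrt(A * k), h = sqrt(A / k)."""
    return [(math.sqrt(a * k), math.sqrt(a / k)) for a in areas]
```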

Bolded values in Table 1 are the best dead-space (DS) values for a particular benchmark, the same algorithm and the three chosen values of k. The simulated annealing algorithm (SA), the constructional algorithm (CA) and their combination (SA+CA) are most effective for blocks created with k = 1.0, that is, for square blocks (only 4 out of 36 values turned out to be better for k = 0.5).

Table 1. Comparison of results obtained for different values of k for the simulated annealing algorithm (SA), the constructional algorithm (CA) and their combination (SA+CA). The table contains the minimum, maximum and average percentage values of the DS criterion for 10 different initial solutions. The BIS (Best Initial Solution) lines give the values of DS obtained for the initial solution which gave the best result.

Benchmark        k = 0.5                   k = 1.0                   k = 2.0
                 SA      SA+CA   CA        SA      SA+CA   CA        SA      SA+CA   CA
apte    BIS      49.83   6.93    7.80      12.22   11.93   1.32      16.15   15.72   5.29
        min      24.96   6.93    7.80      4.93    1.94    1.32      16.15   11.34   5.29
        avg.     44.02   19.97   20.90     13.07   9.82    5.46      39.95   21.67   30.56
        max      55.45   26.98   44.38     20.07   15.18   8.18      59.88   34.38   49.83
xerox   BIS      44.73   27.87   11.40     37.75   5.21    1.45      41.37   28.29   7.29
        min      25.63   16.67   11.40     25.36   5.21    1.45      25.65   11.36   7.29
        avg.     35.75   22.82   24.25     43.73   16.63   15.45     46.10   16.95   28.06
        max      45.84   31.33   49.99     58.65   32.55   28.12     66.77   32.56   53.35
hp      BIS      47.42   13.82   5.39      27.68   7.41    6.49      41.64   8.78    7.77
        min      28.55   13.53   5.39      27.68   7.41    6.49      20.97   8.78    7.77
        avg.     43.58   23.45   16.35     43.23   18.90   12.16     45.04   30.41   25.75
        max      59.66   45.50   34.14     69.87   30.71   17.02     56.87   56.87   46.34
ami33   BIS      131.71  30.54   11.55     92.23   40.10   10.33     136.89  38.31   19.88
        min      111.93  23.09   11.55     80.26   16.28   10.33     101.13  26.20   19.88
        avg.     145.69  31.25   23.42     94.33   23.62   19.18     136.68  32.75   33.45
        max      188.34  45.68   41.54     116.99  40.10   32.77     189.47  38.31   55.48

(All values are in %.)

The problem is to find a soft-blocks packing that minimizes the value of the DS criterion (3); the perfect result would be DS = 0%. Such a result was not obtained for any benchmark. The reason may be that the results depend strongly on the algorithm generating the initial packing (here the packing is generated randomly); if the method of generating the initial packing were changed, one would presumably obtain a better result. Moreover, it is not guaranteed that the best result produced by the SA algorithm is the one minimized most effectively by the CA algorithm, e.g. for k = 0.5 on the benchmark xerox the SA+CA algorithm improved the result from 44.73% to 27.87% (BIS line, see the description of Table 1), while for the sample apte it improved from 49.83% to 6.93%. To conclude, a worse initial solution can give a better final result (as in the case of simulated annealing).

Experiment 2

In this experiment, the average percentage value of DS and the average run time are compared for the considered algorithms and the best algorithms known from the literature ([23], [1]). Based on the results of the previous experiment, k = 1.0 was used to produce hard-blocks from soft-blocks.

Table 2. Comparison of the following algorithms: simulated annealing (SA), the constructional algorithm minimizing the block packing (CA) and the combined algorithm (SA+CA) with the algorithm suggested in [23] (Lagrangian) and the branch and bound algorithm [1] (CompaSS). The column Initial Solution (IS) shows the average value of DS for the 10 initial packings.

Benchmark  IS [%]   SA               SA+CA            CA               Lagrangian       CompaSS
                    DS [%]  t [s]    DS [%]  t* [ms]  DS [%]  t [ms]   DS [%]  t [s]    DS [%]  t [ms]
apte       99.19    13.07   19.22    9.82    19       5.46    25       0.54    53.0     0.75    50
xerox      49.42    43.73   23.68    16.63   146      15.45   20       0.40    71.6     0.00    35
hp         119.20   43.23   32.13    18.90   20       12.16   32       1.40    107.3    0.00    42
ami33      141.53   94.33   232.93   23.62   261      19.18   118      4.30    774.6    0.00    883

* The time of operation refers only to the constructional algorithm.

According to the data in Table 2, the constructional algorithm (CA) significantly improves (minimizes) both the initial packing (DS decreased 18 times compared to the initial packing for the sample apte) and the packing provided by the simulated annealing algorithm (DS decreased 2.5 times for the sample xerox). The constructional algorithm is faster than the CompaSS algorithm on the same computer system, and much faster than the simulated annealing based on Lagrangian relaxation (whose results were computed on a computer with a Pentium III 600 MHz processor). The execution time of the CA algorithm is up to 7.5 times shorter than, for example, that of the CompaSS algorithm. Unfortunately, the DS values of the solutions given by the CA algorithm are worse than those produced by the algorithms from the literature. In conclusion, the considered constructional algorithm,

(determined during experiments): Tp = 400, ϵ = 0.001, α = 0.999.

Experiment 1

The simulated annealing algorithm requires hard-blocks to operate correctly. Hard-blocks were obtained from soft-blocks by setting three constant values of k = 0.5; 1.0; 2.0, so the first stage of the algorithm is to create hard-blocks according to these three values. During the first experiment, results were obtained and compared for three algorithms (SA, CA, SA+CA) applied to 10 initial packings generated randomly and for three values of k ratio.

Bolded values in Table 1 are the best dead-space (DS) values for a particular benchmark, the same algorithm and three chosen values of k. The simulated annealing algorithm (SA), constructional algorithm (CA) and a combination of the two (SA+CA) are the most effective for blocks created for k = 1.0, that is for square blocks (4 out of 36 values turned out to be better for k = 0.5). Table 1. Comparison of results obtained at different values of k for the simulated annealing algorithm (SA), constructional algorithm (CA) and a combination of the two (SA+CA). The table contains minimum, maximum and average percentage value of the DS criterion for 10 different initial solutions. BIS (Best Initial Solution) lines include values of the DS obtained for the initial solution which gave the best result

Benc

hmark

k = 0.5 k = 1.0 k = 2.0

SA [%]

SA +CA [%]

CA [%]

SA [%]

SA +CA [%]

CA [%]

SA [%]

SA+CA [%]

CA [%]

apte

BIS 49.83 6.93 7.80 12.22 11.93 1.32 16.15 15.72 5.29

min 24.96 6.93 7.80 4.93 1.94 1.32 16.15 11.34 5.29

avg. 44.02 19.97 20.90 13.07 9.82 5.46 39.95 21.67 30.56


(determined during experiments): Tp = 400, ϵ = 0.001, α = 0.999.

Experiment 1

The simulated annealing algorithm requires hard-blocks to operate correctly. Hard-blocks were obtained from soft-blocks by setting three constant values of k = 0.5; 1.0; 2.0, so the first stage of the algorithm is to create hard-blocks according to these three values. During the first experiment, results were obtained and compared for the three algorithms (SA, CA, SA+CA) applied to 10 randomly generated initial packings and for the three values of the k ratio.

Bolded values in Table 1 are the best dead-space (DS) values for a particular benchmark, the same algorithm and the three chosen values of k. The simulated annealing algorithm (SA), the constructional algorithm (CA) and the combination of the two (SA+CA) are most effective for blocks created with k = 1.0, that is, for square blocks (only 4 out of 36 values turned out to be better for k = 0.5).

Table 1. Comparison of results obtained at different values of k for the simulated annealing algorithm (SA), constructional algorithm (CA) and a combination of the two (SA+CA). The table contains the minimum, maximum and average percentage values of the DS criterion for 10 different initial solutions. BIS (Best Initial Solution) lines contain the values of the DS obtained for the initial solution which gave the best result

Benchmark          k = 0.5                   k = 1.0                   k = 2.0
                   SA      SA+CA   CA        SA      SA+CA   CA        SA      SA+CA   CA
apte      BIS      49.83    6.93    7.80     12.22   11.93    1.32     16.15   15.72    5.29
          min      24.96    6.93    7.80      4.93    1.94    1.32     16.15   11.34    5.29
          avg.     44.02   19.97   20.90     13.07    9.82    5.46     39.95   21.67   30.56
          max      55.45   26.98   44.38     20.07   15.18    8.18     59.88   34.38   49.83
xerox     BIS      44.73   27.87   11.40     37.75    5.21    1.45     41.37   28.29    7.29
          min      25.63   16.67   11.40     25.36    5.21    1.45     25.65   11.36    7.29
          avg.     35.75   22.82   24.25     43.73   16.63   15.45     46.10   16.95   28.06
          max      45.84   31.33   49.99     58.65   32.55   28.12     66.77   32.56   53.35
hp        BIS      47.42   13.82    5.39     27.68    7.41    6.49     41.64    8.78    7.77
          min      28.55   13.53    5.39     27.68    7.41    6.49     20.97    8.78    7.77
          avg.     43.58   23.45   16.35     43.23   18.90   12.16     45.04   30.41   25.75
          max      59.66   45.50   34.14     69.87   30.71   17.02     56.87   56.87   46.34
ami33     BIS     131.71   30.54   11.55     92.23   40.10   10.33    136.89   38.31   19.88
          min     111.93   23.09   11.55     80.26   16.28   10.33    101.13   26.20   19.88
          avg.    145.69   31.25   23.42     94.33   23.62   19.18    136.68   32.75   33.45
          max     188.34   45.68   41.54    116.99   40.10   32.77    189.47   38.31   55.48

(all SA, SA+CA and CA values are in %)
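The DS values tabulated above can be computed for any packing. The criterion's formula (3) is given earlier in the paper; the following is only a minimal sketch, assuming DS is the percentage of the bounding floorplan area left uncovered by the blocks (the function name and this normalization are our assumptions, not the authors' code):

```python
def dead_space(floorplan_w, floorplan_h, blocks):
    """Percentage of the bounding floorplan area not covered by blocks.

    `blocks` is a list of (width, height) pairs; DS = 0% would mean
    a perfect packing with no wasted area.
    """
    total_block_area = sum(w * h for w, h in blocks)
    floorplan_area = floorplan_w * floorplan_h
    return 100.0 * (floorplan_area - total_block_area) / floorplan_area

# Two 1x1 blocks packed into a 2x1 strip leave no dead space:
print(dead_space(2, 1, [(1, 1), (1, 1)]))  # 0.0
```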

The problem is to find a soft-block packing that minimizes the value of the DS criterion (3). The perfect result would be DS = 0%; such a result was not obtained for any benchmark. A likely reason is that the results depend strongly on the algorithm generating the initial packing (here, the packing is generated randomly), so a different method of generating initial packings would presumably yield better results. Moreover, the best result produced by the SA algorithm is not necessarily the one most effectively minimized by the CA algorithm: for k = 0.5, the SA+CA algorithm improved the result of the benchmark xerox from 44.73% to 27.87% (BIS line – see the description of Table 1) and of the benchmark apte from 49.83% to 6.93%. To conclude, a worse initial solution can give a better final result (as in the case of simulated annealing itself).

Experiment 2

In this experiment, the average percentage value of the DS and the average execution time are compared for the considered algorithms and for the best algorithms known from the literature ([23], [1]). Based on the results of the previous experiment, k = 1.0 was used to produce hard-blocks from soft-blocks.

Table 2. Comparison of the simulated annealing algorithm (SA), the constructional algorithm minimizing block packing (CA) and the combined algorithm (SA+CA) with the algorithm suggested in [23] (Lagrangian) and the branch and bound algorithm [1] (CompaSS). The column Initial Solution (IS) shows the average value of the DS over 10 initial packings

Benchmark   IS [%]     SA               SA+CA             CA               Lagrangian        CompaSS
                       DS [%]   t [s]   DS [%]  t* [ms]   DS [%]  t [ms]   DS [%]   t [s]    DS [%]  t [ms]
apte         99.19     13.07    19.22    9.82     19       5.46     25      0.54     53.0     0.75     50
xerox        49.42     43.73    23.68   16.63    146      15.45     20      0.40     71.6     0.00     35
hp          119.20     43.23    32.13   18.90     20      12.16     32      1.40    107.3     0.00     42
ami33       141.53     94.33   232.93   23.62    261      19.18    118      4.30    774.6     0.00    883

* Time of operation refers only to the constructional algorithm

According to the data in Table 2, the constructional algorithm (CA) significantly improves (minimizes) both the initial packing (DS decreased 18-fold compared to the initial packing for the benchmark apte) and the packing provided by the simulated annealing algorithm (DS decreased 2.5-fold for the benchmark xerox). The constructional algorithm is faster than the CompaSS algorithm on the same computer system, and much faster than the simulated annealing based on Lagrange relaxation (whose results were computed on a Pentium III 600 MHz processor). The execution time of the CA algorithm was observed to be up to 7.5 times shorter than that of, for example, the CompaSS algorithm. Unfortunately, the DS values of the solutions given by the CA algorithm are worse than those produced by the algorithms from the literature. In conclusion, the considered constructional algorithm,


despite the fact that it gives solutions with a worse DS, is faster than the other algorithms. For this reason it may be applicable to chips consisting of a large number of elements, where computation time is a considerable factor. It may also be used to improve packing results produced by an algorithm that minimizes the length of interconnections [2].

7. Summary

The constructional algorithm CA described in this paper significantly decreases the value of the DS criterion and, consequently, reduces the overall area of the floorplan for all tested benchmarks. It should be emphasized that the best results are obtained for layouts whose initial packing consists of square blocks (k = 1.0). Great advantages of the algorithm are its short execution time and its ability to start from any initial packing represented in a Cartesian coordinate system. The proposed algorithm may be used to improve results obtained from other algorithms.

AUTHORS
Marcin Iwaniec*, Adam Janiak – Institute of IT, Automated Control Engineering and Robotics, Wrocław University of Technology, Z. Janiszewskiego 11/17, 50-372 Wrocław, Poland, [email protected], [email protected]
*Corresponding author

REFERENCES

[1] Chan H., Markov I., "Practical slicing and non-slicing block-packing without simulated annealing". In: Proc. Great Lakes Symposium on VLSI (GLSVLSI), 2004, pp. 282–287.

[2] Chen T.C., Chang Y.W., Lin S.C., "IMF: Interconnect-driven multilevel floorplanning for large-scale building-module designs". In: Proc. ICCAD'05, ACM/IEEE, 2005, pp. 159–164.

[3] Chen T.C., Chang Y.W., "Modern floorplanning based on B*-tree and fast simulated annealing", IEEE Trans. on CAD, vol. 25, no. 4, 2006, pp. 637–650.

[4] Chen S., Yoshimura T., "Fixed-outline floorplanning: block-position enumeration and a new method for calculating area costs", IEEE Trans. Computer-Aided Des. Integrated Circuits Syst., vol. 27, no. 5, 2008, pp. 858–871.

[5] Chen J., Zhu W., Ali M.M., "A hybrid simulated annealing algorithm for nonslicing VLSI floorplanning", IEEE Trans. Syst., Man, and Cybernetics, Part C: Appl. Rev., vol. 41, no. 4, 2011, pp. 544–553.

[6] Chiang C.W., “Ant colony optimization for VLSI floorplanning with clustering constraint”, J. Chin. Inst. Ind. Eng., vol. 26, no. 6, 2009, pp. 440–448.

[7] Hayes J.P., Murray B.T., "Testing ICs: getting to the core of the problem", IEEE Computer Magazine, no. 11, 1996, pp. 32–38.

[8] Itoga H., Kodama C., Fujiyoshi K., "A graph based soft module handling in floorplan", IEICE Trans. Fundam. Electron. Commun. Comput. Sci., E88-A(12), 2005, pp. 3390–3397.

[9] Janiak A., Kozik A., Lichtenstein M., "New perspectives in VLSI design automation: deterministic packing by Sequence Pair", Annals of Operations Research, Springer Netherlands, 179, 2010, pp. 35–56.

[10] Kang M., Dai W.-M., “General floorplanning with l-shaped, t-shaped and soft blocks based on bounded slicing grid structure”. In: Design Automation Conference 1997. Proceedings of the ASP-DAC ’97. Asia and South Pacific, 1997, pp. 265–270.

[11] Kirkpatrick S., Gelatt C., Vecchi M., “Optimization by Simulated Annealing”, Science, no. 220(4598), 1983.

[12] Liu J., Zhong W.C., Jiao L.C., Li X., “Moving block sequence and organizational evolutionary algorithm for general floorplanning with arbitrarily shaped rectilinear blocks”, IEEE Trans. Evol. Comput., vol. 12, no. 5, 2008, pp. 630–646.

[13] Ma Y., Hong X., Dong S., Cai Y., Cheng C.-K., Gu J., "Floorplanning with abutment constraints based on corner block list", Integration, the VLSI Journal, 31(1), 2001, pp. 65–77.

[14] Murata H., Fujiyoshi K., Nakatake S., Kajitani Y., “VLSI module placement based on rectangle-packing by the sequence pair”, IEEE Transaction on Computer Aided Design of Integrated Circuits and Systems, vol. 15, no. 12, 1996.

[15] Otten R.H.J.M., van Ginneken L.P.P.P., “Floorplan design using simulated annealing”. In: Proc. Intl. Conf. on CAD, 1984, pp. 96–98.

[16] Rebaudengo M., Reorda M.S., “GALLO: genetic algorithm for floorplan area optimization”, IEEE Trans. Comput. – Aided Des. Integrated Circuits Syst., vol. 15, no. 8, 1996, pp. 943–951.

[17] Valenzuela C.L., Wang P.Y., “VLSI Placement and Area Optimization Using a Genetic Algorithm to Breed Normalized Postfix Expressions”, IEEE Transactions on Evolutionary Computation, vol. 6, no. 4, 2002, pp. 390–401.

[18] Wang H.Y., Hu K., Liu J., Jiao L.C., "Multiagent evolutionary algorithm for floorplanning using moving block sequence", IEEE Congr. Evol. Comput., 2007, pp. 4372–4377.

[19] Wong D.F., Liu C.L., “A new algorithm for floorplan design”. In: Proc. Design Autom. Conf., 1986, pp. 101–107.

[20] Xiaoping T., Ruiqi T., Wong D.F., “Fast evaluation of sequence pair in block placement by longest common subsequence computation”, IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 20, no. 12, 2001, pp. 1406–1413.

[21] Xiong E.S.J., Wong Y.-C., He L., “Constraint driven i/o planning and placement for chip-package co-design”. In: Proc. Asia and South Pacific Design Autom. Conf., 2006, pp. 207–212.

[22] Yang S., "Logic synthesis and optimization benchmarks", Microelectronics Center of North Carolina, Research Triangle Park, N.C., Tech. Rep., 1991.

[23] Young F., Chu C., Luk W., Wong Y., "Handling soft modules in general nonslicing floorplan using Lagrangian relaxation", IEEE Transactions on Computer-Aided Design of Integrated Circuits and Systems, vol. 20, no. 5, 2001, pp. 687–692.


A Comparison of Buffer Sizing Techniques in the Critical Chain Method. Case Study

Anna Slusarczyk, Dorota Kuchta, Philip Verhulst, Willem Huyghe, Koen Lauryssen, Torrino Debal

Submitted 28th July 2012; accepted 16th December 2012

Abstract: The presented paper is an attempt to evaluate the application of different buffer sizing methods (within the critical chain approach) to a real-life project, in order to show that it is important to consciously choose one of the existing methods. The implementation of the different approaches to the buffer sizing procedure resulted in eight unique schedules. Once all the methods were implemented, the authors of this paper stumbled upon two inconveniences (slack vs. buffer, splitting up buffers). Finding a solution for these handicaps yielded eight more schedules. In order to evaluate all the determined solutions, the authors decided to carry out a simulation study. A random dataset of 2000 observations was generated using the mean and variance calculated for the probability distribution determined by the three possible durations (optimistic, expected and pessimistic) given by the project team members. To validate the performance of the different methods, two penalty systems were designed. The test results indicate which specific buffer sizing technique performs best in the presented case, taking into account the attitude of the decision maker. The simulation study is proposed as a method to be applied in each application of the critical chain method, because it makes the use of the critical chain method more effective.

Keywords: project management, critical chain, buffers, risk management

1. Introduction

Research concerning buffers, determining their size and positioning, was the natural consequence of the presentation in 1997 by E.M. Goldratt of the critical chain method [1], which is an application of his theory of constraints to the field of project management. The interdisciplinary nature of the method and its revolutionary recommendations for effective project management generated broad interest among both researchers and practitioners, especially as traditional project management methods [5] often turn out to be ineffective. One of the main elements of the critical chain method is obtaining information from the project executors about the safety time hidden in the estimation of each project task duration and accumulating it in the form of buffers (just a few) placed at the end of selected paths. This approach aims to minimize the risk (understood here as the probability) of overrunning the project due date, while minimizing this due date, and thus the project makespan.

Let us briefly summarize the idea of the Critical Chain Method by means of the project network example presented in Figure 1:

Fig. 1. Example of a project network. Source: own elaboration

Each of the four activities in Figure 1 has two duration time estimations, di95 and di50, respectively the so-called safe and aggressive estimate, where the numbers 95 and 50 stand for the probability of keeping the estimate and i for the activity number; the difference between the two estimates is called safety time. According to Goldratt, in the classical approach only the safe (greater) estimate is used, the aggressive one remaining unrevealed. Thus in the classical approach, the project from Figure 1 would have the estimated completion time (deadline) equal to 16 – the length of the path A, B, D with the safe estimates.

In the Critical Chain Method both estimates are revealed; the longest path (taking into account the availability of resources) based on the aggressive estimates is treated as the basis for the calculation of the deadline, but it is corrected by the length of a project buffer, which is considerably smaller than the difference between the longest path based on the safe estimates and that based on the aggressive estimates. However, the exact formula for the buffer may vary. In the project from Figure 1 the longest path based on the aggressive estimates has the length 9 and the difference between the two longest paths is 7, thus the project buffer might be equal to 3.5 (taking half of the difference is one of the possible approaches). The planned deadline of the project is then determined to be equal to 12.5. Apart from the project buffer (abbreviated PB), the shorter paths of the project network (called feeding chains) are protected by so-called feeding buffers (abbreviated FB), whose length is also determined by a varying formula. It should be noted that the feeding chains are generally advised to be scheduled as late as possible, taking into account the feeding buffers. Figure 2 presents the project in question with the buffers:

Fig. 2. Example of a project network – the Critical Chain Method. Source: own elaboration
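For the Figure 1 numbers, the half-the-difference sizing above reduces to simple arithmetic; a sketch (the helper name is illustrative, not part of the method's terminology):

```python
def half_difference_buffer(safe_path_len, aggressive_path_len):
    """Project buffer as half the gap between the safe and aggressive
    longest-path lengths (one of the possible sizing approaches)."""
    return (safe_path_len - aggressive_path_len) / 2.0

safe, aggressive = 16, 9          # longest-path lengths from Figure 1
pb = half_difference_buffer(safe, aggressive)
deadline = aggressive + pb        # planned deadline with the buffer
print(pb, deadline)               # 3.5 12.5
```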

As mentioned above, there are many methods of determining the size of buffers; however, it is not known which method should be applied in which case. The location of buffers may also vary. The methods proposed in scientific articles are described in very general terms (more as a philosophy) or are based on assumptions which are usually impossible to verify (e.g. that the number of activity predecessors increases the risk of the activity delay, or that it is better to take for the buffer size the arithmetic average than the geometric one, etc.). Each author asserts the effectiveness of their method, while the few practical attempts to examine the different methods in a critical way show that the intuitions of the authors are not always reflected in reality [11]. Additionally, the proposed methods are very different and often contradictory. It is obvious that different methods lead to significantly different schedules, but there is no known method of deciding which schedule is the right one in a particular case.

The aim of the paper is to propose to the reader a way of choosing the right critical chain schedule for his or her given case. The hypothesis is that, if the critical chain method is used, the choice of the buffer sizing (and location) method is important. If so (and the present paper delivers a kind of justification of the hypothesis, showing that it is true in one real-world case), the user needs a tool to choose the right method. In this paper we show how this can be done by means of simulation methods based on an Excel spreadsheet. The aim of the paper is not to deliver an overall evaluation of all the existing buffer sizing and location methods. Such an evaluation is still an open question; the hypothesis is that each of them may have advantages and disadvantages which depend on the very project that is being scheduled.

The present paper evaluates an application of different buffer sizing methods to a real-life project, described below, or rather to its part: the strict phase of the e-office solution implementation in the district Wrocław local government.

2. Case description

The project in question is called 'The development of the ICT infrastructure in the region governed by the Wrocław district and its 7 subunits (Czernica, Jordanów Śląski, Kąty Wrocławskie, Kobierzyce, Mietków, Sobótka, Żórawina), as well as increasing availability of e-services to citizens and business representatives from the region of the Wrocław district'. The localities mentioned here are shown in Figure 3:

Fig. 3. The localities included in the project in consideration. Source: own elaboration

The main objective of the project is to integrate and improve the functionality of the ICT infrastructure and to introduce an integrated information system supporting the cooperation between individual units such as the district of Wroclaw and its 7 local subunits, as well as increasing the availability of electronic administrative services for the citizens of the district of Wroclaw.

The following sub-objectives were defined:
– Optimizing the expenses of offices, increasing the effectiveness of the unit management
– Introducing electronic services for citizens and investors
– Improving the document flow by means of the introduction of an electronic document flow system
– Improving data archiving in an electronic way
– Facilitating contact with offices and access to public information
– Improving cooperation and communication between offices and outsourced units.

The population of the region governed by the district Wrocław is approximately 107,000. This is the third largest local government unit in Lower Silesia. The direct vicinity of the Wrocław agglomeration has a positive impact on the development of this area. This region is also attractive from the investment point of view. The best proof is the large concentration of businesses in the so-called Bielański Node.

The project scope is to purchase computers and software aimed at increasing the efficiency of the officers' work in the local government of the district Wroclaw and its subunits' offices, hence increasing the standard of services offered to customers. The implemented solutions will enable citizens of the district Wroclaw to settle certain official matters on-line, through access to e-boxes in the offices. Furthermore, it is planned to introduce e-points in small villages and two customer service offices in Smolec and Gniechowice (Figure 3) to enable the use of the service by those customers who do not have permanent access to the Internet.

3. Literature overview

Since the very introduction of the critical chain method [1], the method has been criticized [11] for having some significant gaps in its basic assumptions. Many attempts to improve Goldratt's solution initiated the development of new approaches concerning buffer sizing techniques. However, no comparative analysis or evaluation of those different techniques exists. The authors of this paper chose eight techniques to be compared in one real-world case. A brief overview of the analyzed approaches is presented below. Once the authors of this paper had determined all the buffer sizes and created a Gantt chart for every technique, they stumbled upon two inconveniences of the analyzed approaches. In order to solve these problems, the solution based on the approach proposed by Połoński and Pruszyński [9] was incorporated.

3.1. Cut and Paste 50% Method of Goldratt

Obviously, the first introduced buffer sizing method was the one proposed by Goldratt himself in his Critical Chain novel [1]. This method assumes that the safety time incorporated in an activity duration has to be cut off. This safety time is then aggregated at the end of the critical chain in the form of a project buffer, and as a feeding buffer wherever a noncritical chain feeds into the critical chain. The size of both types of buffers is equal to 50% of the aggregated safety time of the longest path feeding into the buffer. Since Goldratt did not specify any name for this approach, Tukel et al. [2] referred to it as "the Cut and Paste Method" (C&PM) and Leach [3] as "the 50% of the Chain Method".
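The cut-and-paste rule can be sketched in a few lines, assuming safety time is the safe estimate minus the aggressive one (the function name and the sample chain are illustrative):

```python
def cpm50_buffer(safe_estimates, aggressive_estimates):
    """Goldratt's Cut and Paste Method: the buffer is 50% of the
    aggregated safety time of the chain feeding into it."""
    safety_times = [s - a for s, a in zip(safe_estimates, aggressive_estimates)]
    return 0.5 * sum(safety_times)

# Hypothetical three-activity chain: safe 6, 5, 5 vs aggressive 3, 3, 3
print(cpm50_buffer([6, 5, 5], [3, 3, 3]))  # 3.5
```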

3.2. Adaptive Procedure with Density

Tukel et al. [2], as a counterproposal to that of Goldratt, introduce two adaptive buffer sizing methods incorporating project characteristics and the level of uncertainty in the buffer size determination. The first approach, the Adaptive Procedure with Density (APD), assumes that, for a given number of project activities, delays become more likely as the number of precedence relationships increases, since a delay in the execution of a particular activity will have an impact on all the successors of this activity. Therefore, the bigger the number of precedence relationships, the bigger the buffers should be. This means that in APD the network complexity is reflected as the ratio of the total number of precedence relationships to the total number of activities.

The feeding and project buffer sizes will be calculated in the following way:

(1)

where:
J – total number of feeding chains and critical chains
σij² – variance of an activity i in a chain j
Nj – number of the activities in chain j
PREj – total number of precedence relationships defined on a chain j
BSj – size of the buffer protecting a chain j

3.3. Adaptive Procedure with Resource Tightness

The other method introduced by Tukel et al., the Adaptive Procedure with Resource Tightness (APRT) [2], incorporates into the buffer sizing process a factor called by the authors "resource tightness", in order to reflect uncertainty in the feeding and critical chains. The assumption was made that if the total resource usage is close to the total resource availability, delays are more likely to occur; therefore, the buffers should be larger to absorb these delays. The resource utilization factor, an important parameter, is calculated for each resource as the ratio of the total resource usage to the total resource availability. Additionally, the chain standard deviation is computed assuming the applicability of the central limit theorem, which says that the mean duration of the chain is equal to the sum of the mean durations of the activities making up the chain, and the variance of the chain is equal to the sum of the variances of the activities making up the chain. Hence, the feeding and project buffer sizes will be calculated in the following way:

(2)

(3)

where:
J – total number of feeding chains and critical chains
σj – standard deviation of a chain j
r – resource usage
Rav – resource availability
BSj – size of the buffer protecting chain j

3.4. Risk Class Assessment

This buffer sizing technique takes into account the fact that activities with a large variance will have pessimistic or safe duration estimates much larger than the estimated average duration, whereas activities with a small variance will have pessimistic duration estimates close to the average duration estimates. Activities with a higher chance of a large deviation from the average estimates should have a bigger buffer. The Risk Class Assessment technique calculates the relative dispersion (RD = standard deviation / average duration estimate) of an activity and uses it to assign an appropriate buffer size. The relative dispersion is a measure of the uncertainty of an activity: the higher the relative dispersion, the higher the odds that the activity will have a much larger duration than the average estimated duration, which implies a higher uncertainty. The next step is to assign the activities to one of the four risk classes (A: very


low uncertainty, B: low uncertainty, C: high uncertainty, D: very high uncertainty). Unfortunately, the consulted sources did not provide any guideline on which ranges of relative dispersion fall within which risk class [6]. Therefore the following classification was used: RD range 0–0.13 = A; RD range 0.14–0.19 = B; RD range 0.20–0.25 = C; RD range 0.26–1 = D. Finally, the average estimated duration of each activity is multiplied by the proposed buffer size of its risk class to generate the appropriate buffer size. After the buffer size per activity is calculated, all the buffers from the activities in a feeding chain are summed up. The total buffer is then placed at the end of the feeding chain. Table 1 provides the proposed buffer sizes for a low safety (86%), medium safety (95%) and high safety (99%) method. The medium safety and high safety methods are used in this paper.

Table 1. Buffer sizes for different activity classes as per-centages of the chain length

Classification Low safety Medium safety High safety

A 4% 8% 12%

B 12% 24% 36%

C 20% 40% 60%

D 28% 57% 85%

Source: own elaboration
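The classification and summation steps described above can be sketched as follows, using the RD boundaries adopted in this paper and the medium-safety percentages from Table 1 (the function names and the sample activities are illustrative, not part of the original method):

```python
def risk_class(relative_dispersion):
    """Map relative dispersion (std / average duration) to a risk class,
    using the boundaries adopted in the text."""
    if relative_dispersion <= 0.13:
        return "A"
    if relative_dispersion <= 0.19:
        return "B"
    if relative_dispersion <= 0.25:
        return "C"
    return "D"

# Medium-safety (95%) buffer percentages from Table 1
MEDIUM_SAFETY = {"A": 0.08, "B": 0.24, "C": 0.40, "D": 0.57}

def chain_buffer(activities):
    """Sum the per-activity buffers of a chain: each activity contributes
    its average duration times the percentage of its risk class."""
    return sum(dur * MEDIUM_SAFETY[risk_class(std / dur)]
               for dur, std in activities)

# Two hypothetical activities given as (average duration, standard deviation)
print(chain_buffer([(10, 1.0), (10, 3.0)]))  # ≈ 6.5 (class A + class D)
```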

3.5. Original RESM

The original Root Error Square Method uses only the expected and pessimistic durations of the tasks in the project to determine the feeding buffer and project buffer sizes. With uncertainty being the difference between the pessimistic and expected duration, the standard deviation of a task equals half of its uncertainty. The standard deviation of a chain equals the root of the sum of the squares of all the standard deviations of the tasks contained in this chain.

To set a feeding buffer, it suffices to take double the standard deviation of the feeding chain. The project buffer equals double the standard deviation of the critical chain [12].
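Under the definitions above (task standard deviation = half the pessimistic-minus-expected gap; chain deviation = root of the summed squares; buffer = double the chain deviation), a minimal sketch (the function name is ours):

```python
from math import sqrt

def resm_buffer(expected, pessimistic):
    """Original RESM buffer for one chain.

    Task std = (pessimistic - expected) / 2; the chain std is the root
    of the summed squares; the buffer doubles the chain std.
    """
    stds = [(p - e) / 2.0 for e, p in zip(expected, pessimistic)]
    return 2.0 * sqrt(sum(s * s for s in stds))

# Hypothetical chain of two tasks with uncertainties 6 and 8 (stds 3 and 4)
print(resm_buffer([4, 5], [10, 13]))  # 10.0
```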

3.6. Ashtiani RESM

The Ashtiani Root Error Square Method differs from the original RESM method by one parameter: the standard deviation of an individual task is calculated by dividing the uncertainty defined in Section 3.5 by 1.3. As a consequence, the buffers are much bigger with the Ashtiani method than with the original method. This method can be considered very safe with regard to the possibility of exceeding buffers; if reliability is of major importance, this method is recommended [10].
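Since only the divisor changes, the Ashtiani variant can be sketched by swapping 2 for 1.3 in the task-deviation step (again, the function name and sample data are illustrative):

```python
from math import sqrt

def ashtiani_buffer(expected, pessimistic):
    """Ashtiani variant of RESM: task std = uncertainty / 1.3 (instead
    of / 2); the buffer still doubles the chain std."""
    stds = [(p - e) / 1.3 for e, p in zip(expected, pessimistic)]
    return 2.0 * sqrt(sum(s * s for s in stds))

# Same hypothetical chain as before (uncertainties 6 and 8)
print(ashtiani_buffer([4, 5], [10, 13]))  # ≈ 15.38, vs 10.0 for original RESM
```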

3.7. SSQ Method

Another approach to the buffer sizing process was proposed by Leach [3]. The Square Root of the Sum of the Squares (SSQ) method is very similar to the original RESM method; the difference lies in how the buffer sizes are extracted from the standard deviation of the branches. This method assumes that the buffer size is equal to the square root of the sum of the squares of the differences between the low risk estimate and the mean estimate of each activity duration lying on the chain feeding into the buffer. The original RESM method doubles the standard deviation to obtain the buffer size, while the SSQ method sets the standard deviation itself as the buffer size. In the case when the feeding chain splits, only the longest chain, or the largest result out of all these chains, should be applied. Leach does not specify the meaning of the term 'low risk estimate'. For the purposes of the presented analysis it will be assumed that 'low risk estimate' is equivalent to 'pessimistic estimate' in the three-point-estimate PERT [7].
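Under the assumption adopted here (low risk estimate = PERT pessimistic estimate), the SSQ buffer is the root of the summed squared gaps between the pessimistic and mean durations, with no doubling, unlike RESM. A sketch (the standard three-point PERT mean (o + 4m + p)/6 is not spelled out in the text and is our assumption; the names are illustrative):

```python
from math import sqrt

def pert_mean(optimistic, most_likely, pessimistic):
    """Standard three-point PERT mean estimate."""
    return (optimistic + 4 * most_likely + pessimistic) / 6.0

def ssq_buffer(estimates):
    """SSQ buffer: root of the sum of squared (pessimistic - mean) gaps.

    `estimates` is a list of (optimistic, most likely, pessimistic)
    triples for the activities on the chain feeding into the buffer.
    """
    return sqrt(sum((p - pert_mean(o, m, p)) ** 2 for o, m, p in estimates))

# Hypothetical two-activity chain
print(ssq_buffer([(2, 5, 8), (3, 6, 15)]))  # ≈ 8.544, i.e. sqrt(73)
```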

4. Project Data

In this chapter we present the input data used in our study. The data was collected by means of document analysis and interviews. The duration time estimates were determined using various techniques advised in the literature concerning the critical chain method.

4.1. Project Network

The project network is presented in Figure 4, with the two duration estimates given under each activity node. The activity durations are given in weeks. Since there are a few activities which can initiate the project, the dummy activity 'Start' was added to simplify the overview. There is no dummy activity at the end of the network, since a single activity (i.e. activity no. 15) terminates the project.

Fig. 4. Project network. Source: own elaboration

4.2. Resources

The network comprises 16 tasks, whose durations and resource requirements are listed in Table 2.

As it was mentioned before, 7 renewable resourc-es are used to complete the project. Their character-istics and information about availability is presented in Table 3. The information concerning task charac-teristics, their resource requirements, duration esti-mates, as well as resource description were collected by means of interviews with project team members.

Page 48: Jamris 2013 vol 7 no 3

Journal of Automation, Mobile Robotics & Intelligent Systems VOLUME 7, N° 3 2013

Articles 47

Table 3. Resources description

Resource type No.  Description  Availability  Availability period

1 Technician 6 all project

2 IT Expert 1 all project

3 Procurement officer 2 all project

4 Administrator 2 all project

5 HR specialist 2 all project

6 IT coach 1 all project

7 Software developer 5 all project

Source: own elaboration

Table 4. Critical chain and feeding chains description

Chain no.  Task numbers  Chain duration (weeks)

CC 4,5,9,16,14,15 22

1 1 8

2 2,3 7

3 6,8 5

4 11, 10 5

5 7,12,13 6

Source: own elaboration

4.3. Critical Chain Identification

For the identification of the critical chain sequence, the application cc-MPulse was used. It is an extension of the MS Project software for implementing the CC/PM method. A resource use analysis (Figure 5) was also performed to verify the results given by the software.

Both approaches resulted in the same conclusion: the project critical chain consists of the sequence of tasks 4-5-9-16-14-15, and the length of the critical chain is equal to 22 weeks (i.e. only a part of the total project duration). Additionally, five feeding chains were identified (Table 4).

4.4. Buffer Sizing Techniques – Schedules

As mentioned before, the authors decided to analyze eight buffer sizing methods, described in the previous sections. The obtained schedules, one for each technique, are depicted in the form of Gantt charts (Figures 6, 7, 24, 25, 26, 27, 28). The figures show how the project makespan and the buffer sizes change from one approach to another. Below, two example figures are presented: the APD method schedule (Figure 6) and the APRT method schedule (Figure 7). The others can be found in the appendix at the end of this paper.

The red blocks represent the activities lying on the critical chain and the grey blocks represent the other activities. The dark green blocks stand for the project buffer, while the light green ones stand for the feeding buffers placed at the end of each feeding chain. The project due date obtained according to a particular method is marked with a red line at the end of each Gantt chart. Additionally, the breaks in the critical chain imposed by inserting feeding buffers were marked with orange circles. The analysis of the attached Gantt charts provides an overview of how the buffer sizes and the possible project due dates change with the different techniques.

Table 2. Activities description (50% duration estimate di50 and resource requirements r1–r7)

No  Description  di50  Resource requirements

1  Expansion of network connections  8  3
2  Purchase of IT equipment  3  1 1
3  Installation of IT equipment  4  3
4  Matching the user requirements with Telecommunication Platform (TP)  3  1 1 1 1
5  Implementing four types of database  6  2
6  Implementing electronic application forms  3  2
7  Establishment of Local Certification Centre  2  1
8  Public Information Bulletin (PIB), on-line register  2  2
9  Integration of the Telecommunication Platform, PIB, web site and HR Module  4  1 4
10  Time recording system  2  1 1
11  Implementing Human Resources Module  3  1 1
12  Purchase of digital signature  1  1 1
13  Safety system related to the implementation of the e-office  3  2 2
14  Pilot implementation of the Telecommunication Platform  3  1 1
15  Configuration and starting-up of hardware and software in all locations of the project  2  3 1 3
16  Training in application use  4  1 1

Source: own elaboration

5. Buffer Location Adjustments

Once all the buffer sizes had been determined and a Gantt chart for every technique was ready, the authors stumbled upon two inconveniences of the eight approaches. In the following paragraphs these inconveniences are described, it is explained where they occur in this particular case, and it is shown how they were solved.

5.1. Slack vs. Buffer

The first inconvenience is the fact that the original

methods do not take into account the slack available in every feeding chain while establishing the buffer sizes. This can result in delaying the parallel tasks on the critical chain. The reason is that all tasks in the Critical Chain Method should be scheduled as late as possible, including those on the critical chain (CC). When the buffer of a particular feeding chain exceeds the slack available in that chain, the parallel activities on the CC will be delayed by the difference between the buffer and the slack.

Fig. 6. APD method schedule. Source: own elaboration

Fig. 5. Resource use profile. Source: own elaboration

Fig. 7. APRT method schedule. Source: own elaboration

Fig. 8. Slack value versus buffer size. Part of the project network. Source: own elaboration

Table 5. Buffer III description

No  Buffer sizing technique  Buffer no. 3 size  CC delay

1  Original RESM  3.61  2.61
2  Ashtiani RESM  5.55  4.55
3  Root Square  1.8  0.8
4  APD  2.25  1.25
5  APRT  4.23  3.23
6  High risk  2.52  1.52
7  Medium risk  1.68  0.68
8  Cut and paste  2.5  1.5

Source: own elaboration


In the presented project this was the case for the part of the network presented in Figure 8, where inside the nodes we have the activity numbers and beneath them the aggressive estimates and safety times of the corresponding activities. The available slack is equal to 1 (= activity 5 (d5 = 6) − activity 8 (d8 = 2) − activity 6 (d6 = 3)), while the sizes of buffer III according to the different methods are given in Table 5. As can be observed, the buffer obtained with every technique is bigger than the available slack. As a consequence, activity 5, which lies on the critical chain, will always be delayed (Table 5), which is not desirable. This delay is visualized in the Gantt chart of one of the methods (Figure 9).

As a solution, the following two changes are suggested: (1) put a maximum on the buffer size, the maximum being the slack available in that particular feeding chain; (2) add the remainder of the feeding buffer to the project buffer [4]. This eliminates the delays of the activities on the critical chain and lowers the risk of the project being late. If the decision maker opted for keeping the same level of risk, he would not want to oversize the project buffer; then he should transfer only a part of the remaining feeding buffer to the project buffer. Which percentage should be chosen, however, goes beyond the scope of this paper. By applying this modification to the example, for the same buffer sizing technique as in Figure 9, the modified Gantt chart shown in Figure 10 is obtained.
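The two suggested changes can be sketched as follows (a hypothetical helper; the 3.61-week feeding buffer reuses buffer III of the original RESM from Table 5, while the initial 5.0-week project buffer is purely illustrative):

```python
def cap_feeding_buffer(feeding_buffer, slack, project_buffer, transfer=1.0):
    """(1) Cap the feeding buffer at the slack of its chain;
    (2) move a chosen fraction of the cut-off part to the project buffer."""
    capped = min(feeding_buffer, slack)
    surplus = feeding_buffer - capped
    return capped, project_buffer + transfer * surplus

# Buffer III (original RESM): 3.61 weeks, but only 1 week of slack.
fb, pb = cap_feeding_buffer(3.61, 1.0, 5.0)  # fb = 1.0, pb grows by 2.61
```

Setting `transfer < 1` keeps part of the surplus out of the project buffer, which corresponds to the risk-level discussion above.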

5.2. Splitting Up Buffers

According to the assumption introduced by Goldratt, the buffer protecting a particular chain is placed at the end of that chain. Additionally, all the safety time removed from the individual activities lying on the protected chain is accumulated in this buffer. In the project in question, in the case of buffer no. 5, this technique presents a problem, which is clarified below. For a better understanding of the following explanation, it is recommended to follow it together with the project network (Figure 4) and Figure 11. The feeding chain (7-12-13) links up with the critical chain at the end of activity 13 (Figure 11). Activity 13 can only start after activity 12 (feeding chain) and activity 9 (critical chain) are finished. Since activity 12 is scheduled as late as possible (one of the assumptions of the critical chain method), it may occur that activity 9 (critical chain) has finished but activity 12 has not, which means that activity 13 cannot start yet, while activity 16 (critical chain) can. In the worst case this means that activity 16 (critical chain) may be finished while activity 13 is not ready yet, and the critical chain will have to wait because of the delay in the feeding chain. This situation needs to be avoided. A solution to this problem is to split the original buffer into two buffers: one before (buffer 5.A) and one after (buffer 5.B) activity 13 (Figure 10) [9]. Another difficulty is to decide what the appropriate sizes of these two new buffers should be. To tackle this problem, the original total buffer size is multiplied by a factor, calculated as follows in the case of buffer 5.A:

factor(buffer 5.A) = [ variance(chain 7–12) / (variance(chain 7–12) + variance(act 13))
+ number of activities(chain 7–12) / (number of activities(chain 7–12) + number of activities(act 13))
+ use of resources(chain 7–12) / (use of resources(chain 7–12) + use of resources(act 13)) ] / 3

Fig. 9. RESM method schedule.Source: own elaboration

Fig. 10. Modified RESM method schedule. Source: own elaboration

Fig. 11. Modified APD method schedule. Source: own elaboration


This formula takes into account all sources of un-certainty and risk, concerning the total project dura-tion, in each part of the chain, and thus assigns the largest part of the original buffer to the part of the chain which carries the highest risk.
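As a sketch, the factor can be computed as follows (Python; the variance and resource-use figures below are hypothetical, since the paper does not list them):

```python
def split_factor(var_a, var_b, n_a, n_b, res_a, res_b):
    """Average of the variance, activity-count and resource-use ratios
    of chain part A (tasks 7 and 12) versus part B (task 13)."""
    return (var_a / (var_a + var_b)
            + n_a / (n_a + n_b)
            + res_a / (res_a + res_b)) / 3

factor_5a = split_factor(var_a=1.2, var_b=0.8, n_a=2, n_b=1, res_a=3, res_b=4)
total_buffer = 3.0                    # hypothetical original buffer 5 size
buffer_5a = factor_5a * total_buffer
buffer_5b = (1 - factor_5a) * total_buffer  # the complement goes after act 13
```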

5.3. Buffer Sizing Techniques – Modified Schedules

The Gantt charts below summarize the schedules with modifications. Again, for simplicity's sake, only two figures are presented here, namely the modified APD method schedule and the modified APRT method schedule. The other figures can be found in the appendix at the end of this paper.

Fig. 12. Modified APRT method schedule. Source: own elaboration

The meaning of the blocks in the Gantt charts in Figures 11 and 12 is the same as before. The only new element to pay attention to is the orange block, which represents the modified buffers.

6. Solution Approach

The implementation of the different buffer sizing methods resulted in eight unique schedules. In order to evaluate all the proposed solutions, the authors decided to conduct a simulation study. The reason for this kind of approach is twofold. First, the analysis is made post factum (i.e. the project is already finished), so no real-life evaluation is possible. Second, every project is unique, as is every potential solution: once a particular schedule has been chosen, there is no possibility of checking in real-life conditions what the result of implementing another solution would have been. A simulation which takes the project characteristics into account makes it possible to evaluate all the possible scenarios and to propose a buffer sizing and location method which, for the very project in question, ensures the best compromise between the protection against delay risk and the planned project completion time.

6.1. Creating Simulation Data

Due to the lack of adequate software to simulate the application of different buffer sizing techniques, an algorithm had to be created in Microsoft Excel. For every task of the project an optimistic, an expected and a pessimistic duration were given. These estimates were obtained through interviews with the team members responsible for the particular project tasks, conducted during the planning phase. The optimistic estimate is understood as the minimum possible time required to accomplish a task; in the project in question it is identical with the aggressive estimate (d50). The expected duration is the estimate of the time required to accomplish the task under the assumption that everything takes the usual course; in the project in question it is the same as the safe estimate (d95). Finally, the pessimistic estimate was given, understood as the maximum possible time required to perform a task. Furthermore, it is assumed that the task duration follows a beta distribution. With x being the optimistic duration, y the expected duration and z the pessimistic duration, it is possible to calculate the mean and variance of these tasks using the following formulas [8]:

m = (x + 4y + z) / 6    (4)

v = ((z − x) / 6)²    (5)

With m being the mean and v the variance, it is possible to calculate the parameters a and b of each task's beta distribution using the following formulas [7]:

a = ((m − x) / (z − x)) · ((m − x)(z − m) / v − 1)    (6)

b = ((z − m) / (z − x)) · ((m − x)(z − m) / v − 1)    (7)

With these values of a and b, 2000 random durations for each task are generated according to its distribution using the EasyFit software. A sample of 2000 durations is considered big enough to reduce incorrect outcomes resulting from random data generation. This sample reflects 2000 project executions with various task durations.
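The same generation can be reproduced with the Python standard library instead of EasyFit (a sketch; the standard PERT and method-of-moments formulas are assumed here):

```python
import random

def beta_params(x, y, z):
    """Beta parameters a, b on [x, z] from the PERT mean and variance:
    m = (x + 4y + z)/6 and v = ((z - x)/6)**2."""
    m = (x + 4 * y + z) / 6
    v = ((z - x) / 6) ** 2
    mu = (m - x) / (z - x)      # mean rescaled to [0, 1]
    s2 = v / (z - x) ** 2       # variance rescaled to [0, 1]
    common = mu * (1 - mu) / s2 - 1
    return mu * common, (1 - mu) * common

def sample_duration(x, y, z):
    """One random task duration drawn from the scaled beta distribution."""
    a, b = beta_params(x, y, z)
    return x + (z - x) * random.betavariate(a, b)

# 2000 simulated durations of a task with estimates (3, 4, 8) weeks:
durations = [sample_duration(3, 4, 8) for _ in range(2000)]
```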

6.2. Simulation Method

By combining the durations of the tasks from the same chain, the total duration of each branch is determined. Using an Excel sheet, it is calculated how often the chain durations exceed the expected duration and, when they do, how deep these durations penetrate their buffers, both feeding and project buffers, using the formula below:

IF (chain duration < expected chain duration) ⇒ "No problem";
ELSE buffer penetration = ((chain duration − expected chain duration) / buffer size) × 100;    (8)

It then suffices to count the number of times no problem occurs and to categorize the penetration percentages according to the selected degrees in order to create a histogram with the overall results. Penetration of up to 33% of the feeding buffer is called 'low penetration', from 34% to 66% 'medium penetration', and over 66% 'high penetration' (Figure 13).
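The classification of formula (8) can be sketched as follows (function and category names are illustrative; the sample durations are hypothetical):

```python
from collections import Counter

def classify_penetration(chain_dur, expected_dur, buffer_size):
    """Classify one simulated chain duration against its buffer (formula (8))."""
    if chain_dur < expected_dur:
        return "no problem"
    penetration = (chain_dur - expected_dur) / buffer_size * 100
    if penetration > 100:
        return "exceeding buffer"
    if penetration <= 33:
        return "low penetration"
    if penetration <= 66:
        return "medium penetration"
    return "high penetration"

# Five hypothetical simulated durations of a 22-week chain, 4.12-week buffer:
runs = [20.5, 22.4, 23.1, 24.0, 27.9]
histogram = Counter(classify_penetration(d, 22.0, 4.12) for d in runs)
```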


Durations bigger than the expected duration of the chain plus the buffer are labeled 'exceeding buffer'. The obtained histogram gives a clear overview of the reliability of the corresponding buffer sizing and location method.

7. Buffer Sizing Techniques – Conclusions

In the following paragraphs the reader will find a brief overview of relevant remarks and characteristics concerning the different buffer sizing techniques. One point requires attention: the reader will notice that for some techniques there is a high risk associated with buffer 5.A (i.e. it was exceeded very often). However, this does not cause a problem as long as buffer 5.B can cope with the delay (i.e. the time left in buffer 5.B is bigger than the amount of time by which buffer 5.A has been exceeded).

7.1. Overall Performance Comparison

To compare the performance of all the techniques, two penalty systems are designed. One focuses on low risk seeking project managers, whose main objective is reliable deadlines. A low risk seeker prefers a longer planned project completion time (deadline), and thus a longer project buffer, so that the project deadline is more likely to be kept; at the same time, he is very unhappy about not keeping this longer deadline. The other penalty system favors a high risk seeker. He wants to be competitive and prefers a shorter project completion time (deadline), and thus a shorter project buffer, while being more ready (although of course not quite happy) to accept exceeding this shorter deadline. The penalty names in Table 6 are as follows: the penalty 'over buffer' penalizes every week by which the project duration exceeds the project critical chain plus the project buffer (i.e. the due date), while the penalty 'due date' is incurred for every week of the project buffer (Table 6).

Two cases were considered. For the low risk seeker, whose main objective is reliable deadlines, a high penalty of 40 000 per week of exceeding the project buffer is applied, together with a relatively small penalty of 500 per week of project buffer. For the high risk seeker, who prefers the planned project durations, and thus the project buffers, to be short, but is ready to accept a higher risk of exceeding the project buffer, these costs are 20 000 and 1 000, respectively. The numbers themselves are fictitious; only the proportion between the penalties is of any relevance.
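One plausible way to aggregate the two penalties into the total cost of a method is a fee per week of project buffer plus a fee per expected week of overrun; the paper does not spell out the exact aggregation, so the sketch below is an assumed reading with illustrative inputs:

```python
def expected_penalty(pb_weeks, expected_overrun_weeks, due_rate, over_rate):
    """Total penalty of a buffer sizing method under one penalty system."""
    return due_rate * pb_weeks + over_rate * expected_overrun_weeks

# Hypothetical method: 6.64-week project buffer, 0.002 expected overrun weeks.
low_risk_cost = expected_penalty(6.64, 0.002, due_rate=500, over_rate=40_000)
high_risk_cost = expected_penalty(6.64, 0.002, due_rate=1000, over_rate=20_000)
```

Because only the ratio of the two rates matters, any common scaling of `due_rate` and `over_rate` ranks the methods identically.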

The first case is that of a low risk seeker. The best technique for a low risk seeker is the Original RESM method. The high risk class method takes second place, just behind the Original RESM method. The medium risk class method performs similarly to the APD and the Root Square methods, which are the third best ones to be considered. The Ashtiani RESM results in a higher total cost than the previously mentioned techniques, but a lower total cost than the remaining ones. Finally, the Copy and Paste method gives a lower total cost than the APRT method, which is the worst method for the low risk seeker.

As for the second case: the high risk seeker has a lower cost assigned to exceeding the project buffer but a higher cost associated with a long project buffer, i.e. with poor efficiency. The most appropriate method for the high risk seeker is the Root Square method. The medium risk class method performs almost as well as the APD method. The Original RESM and the high risk class methods are roughly at the same level and take the third position. The Ashtiani RESM comes next, as it results in a higher total cost than the previous methods, followed by the Cut and Paste technique. Finally, the APRT method is by far the worst one for a high risk seeker, as it results in a much higher total cost.

Fig. 14. Penalties overview. Modified buffer sizing techniques. Source: own elaboration

7.2. Cut and Paste 50% (Goldratt)

The Cut and Paste method can be considered a very safe and reliable buffer positioning and sizing method. As with the APRT and Ashtiani RESM methods, the large buffer sizes it proposes almost guarantee a zero percent chance of project delay. Consequently, it is an expensive method. Because of its similar reliability and higher cost compared to the Ashtiani RESM method, this method should not be used for this project (Figure 15).

Table 6. Penalties description

Penalty  Low risk  High risk

Over buffer  40 000  20 000
Due date  500  1 000

Source: own elaboration

Fig. 13. Buffer penetration level. Source: own elaboration [13]


7.3. Risk Class Assessment Method

The first remark to be made about the risk class method is that the determination of buffer sizes requires some personal interpretation from the user, since there are no clear guidelines on which ranges of the relative dispersion should fall within which risk class. This decision is completely up to the personal judgment of the user, which makes the method somewhat subjective.

The final results of the simulation show that with the medium risk method the project buffer is exceeded in 7.35% of the cases, whereas the high risk project buffer is exceeded in only 0.05% of the cases. This means the high risk method gives a much higher guarantee that the project is finished before the planned deadline, which of course is a logical consequence of the fact that the high risk method assigns much larger buffers than the medium risk method. As for efficiency, the medium risk method gives a total project buffer of 4.36 weeks, which is among the smallest buffers of all the considered techniques. This project buffer length is comparable to the buffer lengths of the APD method (4.426 weeks) and the Root Square method (4.12 weeks). However, the medium risk method results in a slightly higher risk: with the APD method the project buffer is exceeded in only 6.30% of the cases, and in 6.45% with the Root Square method. In the case of the high risk method the total project buffer amounts to 7.27 weeks, which is comparable to the project buffer of the Original RESM (6.641 weeks). However, the high risk method results in less risk, as its project buffer is exceeded in only 0.05% of the cases, whereas the Original RESM project buffer is exceeded in 0.20% of the cases (Figures 17 and 18).

7.4. Original RESM

Within the category of reliable, safe methods, this is one of the best methods for this project. When the modified version is used, it is even the best low risk method: with acceptable buffer sizes it guarantees on-time delivery in 99.8% of the cases (Figure 18).

7.5. Ashtiani RESM

This method can be considered very safe regarding the possibility of exceeding buffers. If reliability is of major importance, then this method is recommended (Figure 19).

Fig. 15. Buffer penetration. Modified Cut and Paste Method. Source: own elaboration

Fig. 16. Buffer penetration. Modified Medium Risk Class Method. Source: own elaboration

Fig. 17. Buffer penetration. Modified High Risk Class Method. Source: own elaboration

Fig. 18. Buffer penetration. Modified Original RESM. Source: own elaboration

Fig. 19. Buffer penetration. Modified Ashtiani RESM Method. Source: own elaboration


7.6. RSQ Method

This method is much riskier than the previous ones. With its smaller buffers, the project deadline will not be met in 6.45% of the cases. Although this may seem rather a lot, if the project manager is a high risk seeker, this is the best method to use for this project (Figures 14 and 20).

7.7. APD Method

The Adaptive Procedure with Density is a method with rather small buffer sizes. Due to this fact, while using this method it is more likely that the buffer will be exceeded. In the considered case the project buffer will be exceeded in 6.30% of the cases, which is a rather high percentage. For a low risk seeker this is an average method compared with the others; for a high risk seeker, on the other hand, it has lower costs than the others (Figure 21).

7.8. APRT Method

The Adaptive Procedure with Resource Tightness is an expensive method, for a low risk seeker as well as for a high risk seeker, due to its enormous buffer sizes. On the other hand, with such big buffers it is rather an exception to exceed the project deadline (Figure 22).

8. Conclusions

The conclusions are divided into three parts. The first part concerns the modifications of the original methods, based on [9], for the project in question; the second part concerns the final choice of the buffer sizing method, also for the project in question. The third part presents general conclusions concerning project time management in any project, together with further research.

8.1. Modification Conclusions

When the original and the modified buffer sizing and positioning are compared, it becomes obvious that the modified buffer sizing and positioning delivers results which dominate those of the original one. The modified approaches give shorter expected project durations and exceed the project buffer in fewer cases.

The explanation of this dominance is twofold. First, by limiting the size of the feeding buffers to the total slack available and by adding the remaining part of the feeding buffer to the project buffer, the project buffers are enlarged. A larger project buffer results in exceeding the project deadline less often. Secondly, by splitting up feeding buffer 5 (the buffer of the feeding chain with tasks 7, 12 and 13), the amount by which buffer 5.B (modified) exceeds the total available slack (1 week) is smaller than for buffer 5 (original). This

Fig. 20. Buffer penetration. Modified RSQ Method. Source: own elaboration

Fig. 21. Buffer penetration. Modified APD Method. Source: own elaboration

Fig. 22. Buffer penetration. Modified APRT Method. Source: own elaboration

Table 7. Overview of the original and modified buffer sizing techniques. (I, I.a) – % of times over PB; II – extra time due to (1) PB and (2) buffer vs. slack; (II.a) – extra time due to PB

Technique name  Original I  Original II  Modified I.a  Modified II.a

Original RESM  9.95%  7.90  0.20%  6.64
Ashtiani RESM  1.10%  15.07  0.00%  11.29
Root Square  11.85%  4.60  6.45%  4.12
APD  23.20%  5.50  6.30%  4.43
APRT  0.00%  20.62  0.00%  16.97
High Risk  0.80%  8.20  0.05%  7.27
Medium Risk  11.80%  4.80  7.35%  4.36
Cut and Paste  0.00%  14.50  0.00%  13.21

Source: own elaboration


reduces the critical chain and, as a consequence, the total expected project duration. This is shown in Table 7.

8.2. Selection of the Buffer Sizing Method for the Project in Question

Having processed all the data, it is now possible to draw a final conclusion about which buffer location and sizing technique should be proposed for the project in question. Since it has been shown that the modified buffer positioning is better for all the techniques with respect to both the project length and the project risk, the authors base their propositions only on the modified schedules.

Which technique should be selected depends on how risk-averse the project manager is. In this particular project (Table 7), if the project manager is a low risk seeker, it is suggested to use the 'Original RESM' buffer sizing technique, which gives a total project length of 28.64 weeks (= CC (22) + PB (6.64)) and exceeds the project buffer 0.20% of the time. In the other case (the project manager being a high risk seeker), it is suggested to use the 'Root Square', or Sum of Squares (SSQ), buffer sizing technique, which gives a total project length of 26.12 weeks (= CC (22) + PB (4.12)) and exceeds the project buffer 6.45% of the time. The project manager of this particular project works for the government and is therefore probably risk-averse, so it is suggested to use the 'Original RESM' technique. These results are confirmed by the penalties in Table 8.

Table 8. Penalties overview. Modified buffer sizing techniques

Technique  Penalty low risk seeker  Penalty high risk seeker

Original RESM  3400.73  6681.45
Ashtiani RESM  5642.82  11285.64
Root Square  4640.00  5410.00
APD  4733.11  5686.21
APRT  8485.66  16971.32
High Risk  3656.12  7282.25
Medium Risk  5120.00  5830.00
Cut and Paste  6605.73  13211.45

Source: own elaboration

8.3. General Conclusions

This paper is a case study which clearly shows that an efficient application of the critical chain method is possible only if it is preceded by a careful study of how the buffers should be located and sized in the case in question. The authors show that several buffer sizing and location methods have been proposed, but that no general rules are known as to when which method should be chosen. The authors formulate the hypothesis that no general rules are possible and that the only way to find out how the buffers should be sized and located is simulation. A fairly easy simulation approach is proposed, which requires only generally accessible software.

However, further research is recommended to find out whether the proposed hypothesis is true. The question may be asked whether some project types can be identified for which unequivocal recommendations might be formulated as to the location and size of buffers in critical chain method applications.

Acknowledgements

This work is supported by the Polish National Science Centre grant "The buffer size and location in a proactive and reactive risk management of a project duration overrunning", 2011/01/B/HS4/02821.

AUTHORS

Anna Ślusarczyk – PhD student, Institute of Organisation and Management, Wroclaw University of Technology.

Dorota Kuchta* – Professor, Institute of Organization and Management, Wroclaw University of Technology, 50-370 Wroclaw, Poland. Head of the research group "Logistics and Project Management", [email protected].

Philip Verhulst, Willem Huyghe, Koen Lauryssen, Torrino Debal – Ghent University, Belgium.

*Corresponding author

References

[1] Goldratt E.M., Critical Chain, North River Press, Great Barrington, MA, 1997.

[2] Tukel O.I. et al., "An investigation of buffer sizing techniques in critical chain scheduling", European Journal of Operational Research, vol. 172, 2005, pp. 401–416.

[3] Leach L.P., Critical Chain Project Management, 2nd Edition, Artech House Inc., 2004.

[4] Van de Vonder S. et al., "The trade-off between stability and makespan in resource-constrained project scheduling", International Journal of Production Research, vol. 44, no. 2, 2006, pp. 215–236.

[5] PMI, A Guide to the Project Management Body of Knowledge, 4th Edition, PMI, 2008.

[6] Shou Y., Yeo K.T., "Estimation of Project Buffers in Critical Chain Project Management". In: Proceedings of the IEEE International Conference on Management of Innovation and Technology, 2000, pp. 162–167.

[7] Vanhoucke M., Project Management. Course Book, Academia Press, Gent, 2011.

[8] Owen C.B., Parameter Estimation for the Beta Distribution, 2008.

[9] Połoński M., Pruszyński K., "Lokalizacja buforów czasu w metodzie łańcucha krytycznego w harmonogramach robót budowlanych – podstawy teoretyczne (cz. I)", Przegląd budowlany, no. 2, 2008, pp. 45–49. (in Polish)

[10] Ashtiani B. et al., "A new approach for buffer sizing in Critical Chain scheduling". In: IEEE International Conference on Industrial Engineering and Engineering Management, 2nd–4th Dec. 2007, pp. 1037–1041. DOI: 10.1109/IEEM.2007.4419350.

[11] Herroelen W., Leus R., "On the merits and pitfalls of critical chain scheduling", Journal of Operations Management, vol. 19, 2001, pp. 559–577.

[12] Newbold R.C., Project Management in the Fast Lane: Applying the Theory of Constraints, St. Lucie Press, 1998.

[13] Kuchta D., Ślusarczyk A., "An implementation of buffers for a risk management of delay in projects performed by local governments. Case study". In: Zarządzanie projektami w nauce i praktyce, ed. D. Kuchta, Oficyna Wydawnicza Politechniki Wrocławskiej, Wrocław 2011.

Appendix

Fig. 24. Risk Classes – Medium Safety Method

Fig. 23. Copy Paste Method

Fig. 25. Risk Classes – High Safety Method

Fig. 26. Sum of Squares Method

Fig. 27. Ashtiani RESM Method


Fig. 28. Modified Copy Paste Method

Fig. 29. Modified Risk Classes – Medium Safety Method

Fig. 30. Modified Risk Classes – High Safety Method

Fig. 31. Modified Sum of Squares Method

Fig. 32. Modified RESM Method

Fig. 33. Modified Ashtiani RESM Method


Rough Surface Description System in 2,5D Map For Mobile Robot Navigation

Adam Niewola

Submitted: 26th December 2012; accepted: 17th May 2013

Abstract: This paper presents a new solution for roughness description based on a 2,5D map. Three parameters for rough surface description are proposed. Research was performed to verify which parameter provides the best combination of calculation time and accuracy of the terrain roughness reconstruction. The prepared map may be used for mobile robot path planning.

Keywords: mobile robot, navigation, 2,5D map

1. Introduction

Mobile robots are used in a wide range of tasks. Robots which deserve particular attention are those which operate in rough terrain, for example exploration robots or robots used by the army.

The mobile robot navigation problem involves several aspects. Localization consists in finding the position and orientation of the robot in the workspace with respect to the robot's map [15]. Path planning consists in finding the optimal path from the start to the goal position according to the applied criteria. In the next step, the robot follows the generated trajectory and collects information about the environment, which is used to update the map.

The map of the environment has a large impact on the accuracy of reaching the goal position as well as on the possibility of following the generated path. The path planning problem is very often considered by researchers. The existing approaches may be divided into two groups.

Roadmap methods create a net of roads along which the robot may move without collision. This leads to the creation of a graph which is afterwards searched to find an optimal path. The road net may be created with different methods. In the visibility graph method, a graph is created by connecting the vertices of obstacles. In the Voronoi diagram method, roads are created by tracing the paths whose distances to two neighboring obstacles are equal. Graph vertices are placed at the points where three paths connect [9].

The created graph may be searched with different algorithms, e.g. the depth-first algorithm, Dijkstra's algorithm and the A* algorithm. The last one is very common. Many modifications of this algorithm have been developed for various applications (e.g. for fast replanning – [6] and [7]).

Another type of path planning methods are potential methods [9]. They use a potential field created from the start and goal positions and the positions and shapes of obstacles. The main problem in these methods are the local minima which exist in potential fields. They prevent the goal position from being reached. There are several ways of avoiding the local minima [10].

There is also a wide range of relatively novel algorithms for path planning, e.g. genetic algorithms, memetic algorithms [12] and probabilistic methods ([13] and [14]).

There are several types of maps used by mobile robots. The most common are 2D cell maps with a status (free, occupied or unknown) assigned to each cell. This approach cannot be applied to mobile robots that operate in an open environment with a high roughness level. Describing a cell as free or occupied is not enough, because some obstacles may be overcome at the cost of higher energy consumption, and some of them may be overcome only in one direction (e.g. moving down a hill may be possible in contrast to moving up). Apart from that, some obstacles occur, like small ponds, sandy areas or small plants, which the robot may also overcome, but with limited velocity. In such an environment, effective path planning requires an appropriate terrain description in the workspace map.

Maps commonly used in mobile robot systems may be divided into three groups [1], [2]:

— the occupancy grids – 2D cell maps (square, rectangular, hexagonal) with a status assigned to each cell; the distance between two cells represents the real distance between two points,

— the topological maps – graph-structured maps which show connections between the most important elements of the environment; the distance between two elements of the map does not correspond to the distance between them in the real environment,

— the hybrid maps – topological-metric maps which are built as a system of local occupancy grids which form the graph nodes of the global topological map.

Maps used by mobile robots which operate in rough terrain are most commonly 2,5D maps ([3], [4], [5]). They are an expansion of standard 2D maps. Each cell may store a number of parameters for appropriate terrain description.

There are also some combined approaches. In [11], each cell of a 2D map is subdivided into 16×16 2,5D cells with a height parameter, a roughness parameter and a slope parameter. All of these parameters are used to compute



the traversability index, which gathers the information about the ease of traverse of the terrain.

In some cases a 2,5D map is not enough and 3D maps are used. They are mainly used when the robot operates in a 3D environment (e.g. underwater or aerial mobile robots). They are rarely proposed for land mobile robots due to the high number of empty cells.

In [15] a new method for 3D map building is proposed. It uses stereo vision for gathering the information about the terrain height. It does not need the disparity map, which reduces the computation cost, so it can be used in real-time systems.

2. Information gathering

In this paper an attempt was made to find a set of parameters describing a rough surface for mobile robot navigation. The paper deals with the information gathered by the robot and does not take into account the mobile robot localization process.

The parameters required for rough surface description from the point of view of path planning are:

— the altitude parameter;
— the small roughness description parameter;
— the information confidence parameter.

The research scene was a point cloud of rough surface terrain obtained from a 3D laser scan. The most common laser scanners are:

— the triangulation scanners – provide the information about the distance with the use of the angle of reflection;

— the time reflection scanners – measure the time between the beam emission and the detection of its reflection;

— the phase shift scanners – similar to the time reflection scanners; they provide higher accuracy thanks to the phase shift measurement.

Measurement error is an integral part of each measurement. In laser scanners, particularly those installed on mobile robots, the error is caused by inaccurate positioning of the scanner in relation to the robot. It is also caused by errors in the robot localization process [16]. The point coordinates obtained by the scanner may be expressed in the global map coordinate system as:

rP,0 = T0,R TR,S rP,S (1)

where: rP,0 – the position of point P in the base coordinate system, T0,R – the transformation matrix from the base coordinate system to the mobile robot coordinate system, TR,S – the transformation matrix from the robot coordinate system to the scanner coordinate system, rP,S – the position of point P in the scanner coordinate system.
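Equation (1) is a chain of homogeneous transformations applied to the measured point. A minimal sketch of this composition (the helper `pose` and all poses and offsets are made-up for illustration, not taken from the paper):

```python
import numpy as np

def pose(yaw_deg, translation):
    """4x4 homogeneous transformation: rotation about the z axis plus translation."""
    a = np.radians(yaw_deg)
    T = np.eye(4)
    T[:3, :3] = [[np.cos(a), -np.sin(a), 0.0],
                 [np.sin(a),  np.cos(a), 0.0],
                 [0.0,        0.0,       1.0]]
    T[:3, 3] = translation
    return T

# Hypothetical poses [mm]: robot in the base frame, scanner mounted on the robot
T_0R = pose(90.0, [1000.0, 500.0, 0.0])      # T(0,R)
T_RS = pose(0.0,  [200.0, 0.0, 300.0])       # T(R,S)

r_PS = np.array([150.0, 0.0, -250.0, 1.0])   # point P in the scanner frame
r_P0 = T_0R @ T_RS @ r_PS                    # equation (1): P in the base frame
# r_P0[:3] is (1000.0, 850.0, 50.0)
```

The homogeneous form lets the rotation and translation of both mountings be applied with two matrix products, exactly as equation (1) composes them.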

Due to the errors mentioned above, the point obtained from the scanner is not described by exact coordinates in the base coordinate system; instead, there is a region within which its coordinates may vary. This may be shown as an ellipsoid within which the coordinates of the obtained point occur with the highest probability (Fig. 1).

On the basis of laser scans of the rough surface terrain, several tests were performed in order to verify which parameters should be used to describe the mobile robot environment in a 2,5D map.

Fig. 1. The error of the point coordinates calculation by the laser scanner

The parameters which were taken into consideration were:

— the parameter describing altitude – the average height, maximum height and mean range height (definitions below) of each cell,

— the roughness index – the standard deviation and the range in each cell,

— the confidence level index – the confidence level and the confidence factor.

The view of one of the tested areas is shown in Fig. 2.

Fig. 2. View of the tested area

3. The Altitude Description

The cell altitude was described in three ways:
— the average height – equation (2),
— the mean range height – equation (3),
— the maximum height – equation (4).

hcell = (1/n) · Σ hi (2)

hcell = hmin + (hmax − hmin)/2 (3)

hcell = hmax (4)

where: hcell – cell altitude, hi – point height, n – number of points in the cell, hmax – maximum height of a point in the cell, hmin – minimum height of a point in the cell.
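The three altitude parameters can be sketched as follows (the function name and the sample heights are illustrative only):

```python
def cell_altitude(heights, method="average"):
    """Cell altitude from the point heights falling into one cell,
    using one of the three parameters of equations (2)-(4)."""
    if method == "average":        # equation (2)
        return sum(heights) / len(heights)
    if method == "mean_range":     # equation (3)
        return min(heights) + (max(heights) - min(heights)) / 2
    if method == "maximum":        # equation (4)
        return max(heights)
    raise ValueError(method)

heights = [120.0, 140.0, 100.0, 180.0]   # heights of points in one cell [mm]
# average: 135.0, mean range: 140.0, maximum: 180.0
```

Note that the mean range height and the maximum depend only on the cell extremes, which is what makes their point-by-point update cheaper than the average (see Tab. 1).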

Results of the average height computation depending on the cell dimensions are shown in Fig. 3 as an example. When the cell dimensions decrease, the precision of the description of the environment in the map increases.

Fig. 3a. Terrain description with the use of the average height (cell 100×100 mm)

Fig. 3b. Terrain description with the use of the average height (cell 200×200 mm)

Fig. 3c. Terrain description with the use of the average height (cell 500×500 mm)

Fig. 3d. View of the tested area

Parameters describing the average height were tested for different cell dimensions. The calculation time for the whole map, as well as for adding one point to the map, was verified. Two algorithms were implemented for the cell height calculation process for the whole map. The first one assigns each point to its cell and then calculates the average value in each cell (the dividing and calculating algorithm). The second algorithm calculates the average value point by point: each point is added to its cell and then the average value in the cell is updated (the point-by-point updating algorithm).

The map dividing and average height calculating algorithm has O(n²) time complexity. The point-by-point calculation algorithm has O(n) time complexity.

Fig. 4. Average height calculation time for the whole map with 100×100 mm cells with the use of the dividing and calculating algorithm

Fig. 5. Average height calculation time for the whole map with 100×100 mm cells with the use of the point-by-point updating algorithm

Fig. 6. Average height calculation time for the whole map consisting of ~17000 points with the use of the dividing and calculating algorithm



Fig. 7. Average height calculation time for the whole map consisting of ~64000 points with the use of the point-by-point updating algorithm

Comparison of these two algorithms for the average value calculation is shown in Figs. 4 and 5. It confirms the time complexity of the two methods.

When the cell dimensions increase, the calculation time of the first algorithm decreases (Fig. 6). When the alternative algorithm was used, the calculation time remained constant while increasing the cell dimensions (Fig. 7).

The cell dimensions (map granularity) should be matched to the dimensions of the robot. Because of the calculation time, large cell dimensions are desirable. However, due to the precision of the terrain description, smaller cells are more suitable (10–30% of the robot width).

In mobile robot applications, updating the map with the information from the sensors may be more important than calculating the heights for the whole map (map preparation for navigation). This involves adding single points obtained from the laser scanner to the map. Updating the average value requires the use of the formula:

hcell_new = (n · hcell + hnew) / (n + 1) (5)

where: hcell_new – updated cell average height value, hnew – height value for new point, n – number of points in the cell.
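Assuming equation (5) in the form hcell_new = (n·hcell + hnew)/(n + 1), the cell average can be maintained one point at a time without storing the individual heights; a minimal sketch (names are illustrative):

```python
def update_average(h_cell, n, h_new):
    """Equation (5): new cell average after adding one point to a cell
    that already holds n points with average h_cell."""
    return (n * h_cell + h_new) / (n + 1)

# adding points one by one reproduces the batch average
points = [110.0, 130.0, 90.0, 150.0]
h, n = 0.0, 0
for p in points:
    h = update_average(h, n, p)
    n += 1
# h is now 120.0, the plain average of the four heights
```

Only the running average and the point count need to be stored per cell, which is what makes the point-by-point updating algorithm O(n) overall.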

Every time a new point is obtained by the scanner, the average value in one (or more) cells should be updated. When using the other cell height parameters (e.g. the maximum height or the mean range value), the cell height value needs to be updated only when the new point has a higher value than the maximum in the cell or a lower value than the minimum.

A comparison of these three parameters as a cell height value is shown in Tab. 1.

Tab. 1. Cell height value calculation after single point adding

parameter           calculation time for adding 1000 new points to the map [ms]
average height      0,042
mean range height   0,034
maximum height      0,028

The single-point-adding results would allow sampling new points from the terrain with a frequency in the MHz range. This is more than typical laser scanners (up to 100 kHz) offer. However, the results shown in Tab. 1 may be far from single point adding in a real mobile robot system due to the time consumed by mobile robot localization, which is out of the scope of this paper. According to the obtained results, the calculation time should not be the factor deciding which parameter is used to describe the rough surface. The average height value was chosen because it provides the best accuracy. In the proposed solution, the terrain slope will be described by the differences between cell heights.

4. Roughness Index

The roughness index is the parameter which describes the height differences within each cell. The average height provides the information about the differences in height between cells, while the roughness index is used to inform the path planner how the terrain height may vary within a cell. The average height smooths the information about the terrain height, and that is why a roughness parameter is also important. It provides information about sudden jumps, big slopes and other vertical obstacles (e.g. walls). The roughness index would not be needed if the cell dimensions were small enough to capture the vertical obstacles, but that is connected with a longer computation time.

Roughness may be described as the local difference in the terrain height. The cell height difference may provide good results only when the cell dimensions are small enough. Otherwise there should be a parameter that shows the level of roughness in each cell. This parameter can be:

— the standard deviation of the height in each cell:

Scell = √( Σ (hi − hcell)² / (n − 1) ) (6)

— the height range in each cell:

Rcell = hmax − hmin (7)

The range calculation time is much shorter and does not require converting the heights to a real number type. Evaluating the new range in a cell after adding a new point is also much faster. Calculating the new standard deviation after adding a new point to the map requires the use of the formula:

Scell_new = √( [ (n − 1) · Scell² + n · hcell² − (n + 1) · hcell_new² + hnew² ] / n ) (8)

where: Scell_new – new standard deviation, Scell – standard deviation before adding new point, hcell – average height before adding new point, hcell_new – average height after adding the new point, hnew – height of new point, n – number of points in the cell (before adding new point).
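Assuming equation (8) in the form S²cell_new = ((n − 1)·S²cell + n·h²cell − (n + 1)·h²cell_new + h²new)/n, the incremental update can be cross-checked against the batch definition of equation (6); a sketch with made-up heights:

```python
import math

def update_std(s_cell, h_cell, n, h_new):
    """Equation (8): cell standard deviation after adding one point to a
    cell that already holds n points with average h_cell and standard
    deviation s_cell. Returns the new std and the new average."""
    h_cell_new = (n * h_cell + h_new) / (n + 1)          # equation (5)
    var = ((n - 1) * s_cell ** 2 + n * h_cell ** 2
           - (n + 1) * h_cell_new ** 2 + h_new ** 2) / n
    return math.sqrt(max(var, 0.0)), h_cell_new

# feed the points one by one, starting from a cell with a single point
pts = [100.0, 140.0, 90.0, 170.0, 120.0]
s, h = 0.0, pts[0]
for i, p in enumerate(pts[1:], start=1):   # i = points already in the cell
    s, h = update_std(s, h, i, p)

# batch formula, equation (6)
mean = sum(pts) / len(pts)
s_batch = math.sqrt(sum((p - mean) ** 2 for p in pts) / (len(pts) - 1))
# s and s_batch agree to floating-point precision
```

The identity behind (8) is that (n − 1)·S² + n·h̄² equals the sum of squared heights, so the update needs only the running average, the std and the point count.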




A comparison of the range and standard deviation calculation is shown in Tab. 2.

Tab. 2. Cell roughness index calculation after single point adding

parameter           calculation time for adding 1000 new points to the map [ms]
range               0,078
standard deviation  0,030

The results shown in Tab. 2 also allow sampling the environment with a higher frequency than typical laser scanners provide. Because the range calculation is much shorter and does not require using a real number format, this parameter was chosen to describe the roughness in each cell.

5. Confidence Factor

The confidence level of the gathered data may be described using the statistical definition of the confidence level. Because the cell height value is estimated by the average height value, the expected value of the height in the cell, μcell, may vary with the confidence level 1 − α between:

hcell − uα/2 · Scell/√n ≤ μcell ≤ hcell + uα/2 · Scell/√n (9)

where: uα/2 – the α/2-quantile of the N(0,1) normal distribution.

Assuming an applied value of the permissible variability of μcell, the confidence level may be calculated as:

1 − α = P(hcell − d ≤ μcell ≤ hcell + d) (10)

where: d – the permissible variability of μcell.

It can be graphically shown as the area below the probability density function of the normal distribution (Fig. 8).

Fig. 8. Confidence level

Equation (10) may also be written with the use of the distribution function F of the N(0,1) distribution:

1 − α = 2F(d√n/Scell) − 1 (11)
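Reading equations (10)–(11) as 1 − α = 2F(d√n/Scell) − 1 (a reconstruction of the garbled formulas, not the paper's exact wording), the confidence level can be evaluated with the error function, since 2F(x) − 1 = erf(x/√2) for the N(0,1) distribution:

```python
import math

def confidence_level(d, s_cell, n):
    """Confidence level 1 - alpha that the expected cell height lies
    within +/- d of the average of n points with standard deviation
    s_cell, assuming 1 - alpha = 2*F(d*sqrt(n)/s_cell) - 1 and using
    the identity 2*F(x) - 1 = erf(x / sqrt(2))."""
    x = d * math.sqrt(n) / s_cell
    return math.erf(x / math.sqrt(2.0))

# choosing d = u_{alpha/2} * s / sqrt(n) with u = 1.96 recovers ~95%
level = confidence_level(1.96 * 30.0 / math.sqrt(25), 30.0, 25)
```

Using `math.erf` avoids a dependency on a statistics library for the N(0,1) distribution function.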

The confidence level calculated this way also requires calculating the standard deviation. Moreover, it does not take into consideration the errors of the evaluation of each point's coordinates by the laser scanner.

Due to these errors, each point may belong to more than one cell, and the probability that a point belongs to a cell may be calculated geometrically (Fig. 9) according to the equation:

pnew = Aerr∩cell / Aerror (12)

where: pnew – the probability of the point belonging to the cell, Aerror – the total area of the error ellipsoid projected onto the (x,y) plane, Aerr∩cell – the intersection area of the error ellipsoid projected onto the (x,y) plane with the area of the cell.

Fig. 9. The probability of belonging to the cell

The assumption that the point errors form an ellipsoid may be too complex due to problems with calculating Aerr∩cell. It may be easier to assume that the point errors form a cuboid or a cube.
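Under the cuboid simplification, the projection of the error region onto the (x,y) plane is an axis-aligned rectangle, and the intersection in equation (12) reduces to interval overlaps. A sketch with made-up coordinates:

```python
def p_new(cell_x, cell_y, a, px, py, rx, ry):
    """Equation (12) under the cuboid error model: the error region
    projects onto the (x, y) plane as a 2rx-by-2ry rectangle centred on
    the measured point (px, py); the cell is the square
    [cell_x, cell_x + a] x [cell_y, cell_y + a]."""
    ox = max(0.0, min(px + rx, cell_x + a) - max(px - rx, cell_x))
    oy = max(0.0, min(py + ry, cell_y + a) - max(py - ry, cell_y))
    area_err = (2.0 * rx) * (2.0 * ry)   # A_error
    return (ox * oy) / area_err          # A_err∩cell / A_error

# a point whose error rectangle straddles the right edge of a 100 mm cell
p = p_new(0.0, 0.0, 100.0, 95.0, 50.0, 10.0, 10.0)   # 0.75
```

With an ellipsoidal error region the same overlap would require an area integral, which is why the cuboid model is attractive for on-line updates.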

Considering the point as a volume of points causes the need to modify equation (5) to the form:

hcell_new = (w · hcell + wnew · hnew) / (w + wnew) (13)

where: w – the current total weight of all points in the cell, wnew – the weight of the new point which is being added to the map in the current step.
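Equation (13) is a weighted running average; a minimal sketch (the values are made-up):

```python
def update_weighted_average(h_cell, w, h_new, w_new):
    """Equation (13): weighted cell average after adding a point of
    weight w_new to a cell whose points have total weight w."""
    return (w * h_cell + w_new * h_new) / (w + w_new)

# three unit-weight points averaging 120.0 mm, plus one more unit-weight point
h = update_weighted_average(120.0, 3.0, 160.0, 1.0)   # 130.0
```

With all weights equal to one, this reduces exactly to equation (5), so the weighted form is a strict generalization of the plain point-count update.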

The proposed wnew calculation is based on the rx, ry, rz point errors and the pnew probability of the point belonging to the cell:

wnew = pnew · a³ / (rx · ry · rz) (14)

where: a – the length of the side of each cell.

The cell dimension a which appears in equation (14) does not have an impact on the calculated average value, but it does have an impact on the value of the confidence factor according to equation (15).

According to the requirements mentioned below a new confidence factor CF was proposed:


— CF has non-negative values; it increases when the information confidence becomes lower;

— CF decreases as the number of points in the cell increases;

— when the number of points is large, adding a new point to the cell does not have a big impact on the CF value.

The proposed equation for calculating the confidence factor is shown below:

CF = 1 / Σ (wi / di) (15)

where: wi – the weight of the point, di – the distance between the point height value and the current average height value in the cell (when it is equal to zero, di should be set to a sufficiently small number to avoid dividing by zero).

When updating the map point by point, another formula may be used:

CFnew = 1 / (1/CF + wnew/dnew) (16)

where: CF – the confidence factor from the previous step, CFnew – the updated confidence factor, wnew – the weight of the new point currently being added to the map, dnew – the distance between the new point height and the updated average height.

The charts in Figs. 10 to 13 show the changes of the confidence factor depending on changes of the other elements of the CF equation.

Fig. 10. Relation of the confidence factor and the number of points in the cell for different values of the point probabilities (a, hi, rx, ry, rz were the same for each point)

Fig. 11. Relation of the confidence factor and the number of points in the cell for different point error values (a, hi, pi were the same for each point)

Fig. 12. Relation of the confidence factor and the probability of belonging to the cell for the same point added to the cell for the 1st, 3rd and 5th time (a, hi, rx, ry, rz were the same for each point)

Fig. 13. Relation of the confidence factor and the rz error of the point for different values of the point probabilities (a, hi, rx, ry were the same for each point)

6. Conclusions

In the 2,5D map used for the description of rough terrain for mobile robot navigation, there is a problem of selecting appropriate parameters to describe the height, slope, roughness and information confidence. As was shown, calculating the height in each cell after adding a new point to the map does not have large computational requirements (for each of the selected parameters). Certainly, the robot localization problem has an impact on the calculation time when adding a new point to the map from the laser scanner measurements; however, it was out of the scope of this paper. The main advantage of the proposed solution of 2,5D map building is the separation of the height and slope description from the roughness description and the confidence level description.

Further research will focus on path planning algorithms with the use of the proposed 2,5D map and on the development of the proposed rough terrain description system.

AUTHOR

Adam Niewola – Technical University of Lodz, Institute of Machine Tools and Production Engineering, Stefanowskiego 1/15 St., 90-924 Łódź, [email protected]



References

[1] Tarutoko Y., Kobayashi K., Watanabe K., “Topological Map Generation based on Delaunay Triangulation for Mobile Robot”. In: SICE-ICASE International Joint Conference, 2006, p. 492.

[2] Siemiątkowska B., Szklarski J., Gnatowski M., Zychewicz A., “Budowa hybrydowej semantyczno-rastrowej reprezentacji otoczenia robota”, Pomiary Automatyka Kontrola, vol. 3, 2010, pp. 279–280 (in Polish).

[3] Ye C., Borenstein J., “A Method for Mobile Robot Navigation on Rough Terrain”. In: International Conference on Robotics and Automation, New Orleans, 2004, pp. 3863–3865.

[4] Triebel R., Pfaff P., Burgard W., “Multi-Level Surface Maps for Outdoor Terrain Mapping and Loop Closing”. In: International Conference on Intelligent Robots and Systems, Beijing, 2006, pp. 2276–2277.

[5] Fong E.H.L., Adams W., Crabbe F.L., Schultz A.C., “Representing a 3-D Environment with a 2,5-D Map Structure”. In: International Conference on Intelligent Robots and Systems, Las Vegas, 2003, pp. 2987–2989.

[6] Podsędkowski L., Nowakowski J., Idzikowski M., Visvary I., “Modified A* algorithm suitable for on-line car-like mobile robots control”, Robot Motion and Control, 1999, pp. 235–240.

[7] Podsędkowski L., Nowakowski J., Idzikowski M., Visvary I., “A new solution for path planning in partially known or unknown environment for nonholonomic mobile robots”, Robotics and Autonomous Systems, vol. 34, issues 2–3, 2001, pp. 142–152.

[8] Porta Garcia M.A., Montiel O., Castillo O., Sepulveda R., Melin P., “Path planning for autonomous mobile robot navigation with ant colony optimization and fuzzy cost function evaluation”, Applied Soft Computing, vol. 9, issue 3, 2009, pp. 1102–1110.

[9] Latombe J.-C., Robot Motion Planning, Kluwer Academic Publishers, 1991, pp. 297–313.

[10] Tang L., Dian S., Gu G., Zhou K., Wang S., Feng X., “A Novel Potential Field Method for Obstacle Avoidance and Path Planning of Mobile Robot”. In: IEEE International Conference on Computer Science and Information Technology, vol. 9, 2010.

[11] Gu J., Cao Q., “Path planning using hybrid grid representation on rough terrain”, Industrial Robot: An International Journal, vol. 36, no. 5, 2009, pp. 497–502.

[12] Bigaj P., “A memetic algorithm for global path planning with movement constraints for a non-holonomic mobile robot”, PhD dissertation, Systems Research Institute, Polish Academy of Sciences, publ. by PIAP, Warsaw 2012. ISBN: 978-83-61278-16-0.

[13] Amato N.M., Wu Y., “A Randomized Roadmap Method for Path Planning and Manipulation Planning”. In: IEEE International Conference on Robotics and Automation, 1996, pp. 113–120.

[14] Lingelbach F., “Path Planning of Mobile Manipulation using Probabilistic Cell Decomposition”. In: IEEE/RSJ International Conference on Intelligent Robots and Systems, 2004, pp. 2807–2812.

[15] Podsędkowski L., Idzikowski M., “Robot localisation methods using the laser scanners”. In: 4th International Workshop on Robot Motion and Control, Puszczykowo, Poland, 2004, Robot Motion and Control: Recent Developments, 2006, vol. 335, pp. 315–332.

[16] Idzikowski M., Podsędkowski L., “Simulation and experimental results of a new method for mobile robot localization”. In: 5th International Workshop on Robot Motion and Control, Puszczykowo, Poland, 23rd–25th June 2005, pp. 175–179.