2013 Proceedings - Section 3

ASME District F - ECTC 2013 Proceedings - Vol. 12


SECTION 3: DYNAMIC SYSTEMS & CONTROL

ASME District F - Early Career Technical Conference Proceedings, ASME District F ECTC 2013

November 2-3, 2013 - Birmingham, Alabama USA

VIBRATION OF ATOMIC FORCE MICROSCOPY USING HAMILTON'S PRINCIPLE

Rachael McCarty, M. Haroon Sheik, Kyle Hodges, and Nima Mahmoodi
University of Alabama, Tuscaloosa, AL, USA

ABSTRACT

Atomic force microscopy (AFM) uses a scanning process

    performed by a microcantilever beam to create a three

    dimensional image of a physical surface that is accurate on a

    nano-scale level. AFM includes a microcantilever beam with a

    tip at the end that is controlled in order to keep the force

    between the tip and the surface constant by changing the

    distance of the microcantilever from the surface. An accurate

    understanding of the microcantilever motion and tip-sample

    force is needed to generate accurate imaging. In this paper,

Hamilton's Principle and the Galerkin Method are employed to investigate the vibration of the microcantilever probe used in

    tapping mode AFM. The goal is to compare two different

    methods of including contact and excitation force in the

    equations of motion and boundary conditions. The first case

    considers contact force at the tip to be a part of the boundary

    conditions of the beam. The second case assumes that the force

    is a concentrated force that is applied in the equations of

    motion, and the boundary conditions are similar to the free end

    of a cantilever beam. For the two cases, the equations of

    motion, the modal shape functions including the natural

    frequencies, the time response functions, and the complete

    beam mechanics are obtained. The first natural frequencies of

    the two models are also compared to each other and to the

    experimental natural frequency of the Bruker Innova AFM with

    MPP-11123-10 microcantilever. For both cases, results are

    shown for neglecting and including the effects of the tip mass.

I. INTRODUCTION

Atomic Force Microscopy (AFM) was originally invented

    and used for nano-scale scanning to create a three dimensional

    image of a physical surface. The scanning process is performed

    by a microcantilever that contacts or taps the surface. More

    recently, microcantilever probes have been used extensively for

    Friction Force Microscopy (FFM), Lateral Force Microscopy

    (LFM), Piezo-response Force Microscopy (PFM), biosensing,

    and other applications [1-4]. Most AFMs operate by exciting

    the microcantilever using a piezoelectric tube actuator at the

    base of the probe. However, some microcantilevers have a layer

    of piezoelectric material on one side for actuation purposes.

    This layer is usually ZnO [5] or Lead Zirconate Titanate (PZT).

    The application of the piezoelectric microcantilever is

    widespread; it has been used for force microscopy, Scanning

    Near-field Optical Microscopy (SNOM), biosensing, and

    chemical sensing [6-9]. An accurate understanding of the

    microcantilever motion and tip-sample force is needed to

    generate accurate imaging.

    The force between tip and sample consists of two main

    forces: Van der Waals force and contact force [10]. In non-

    contact mode, there is only Van der Waals force between the

    AFM tip and sample. However, in a tapping contact AFM, both

    forces are applied to the tip. In this work, only the linear

    contact force is considered, since it is much larger than the Van

    der Waals force.

    The dynamics of the microcantilever have been

    experimentally and analytically studied in some research

    works. Experimental investigations have been performed in air

    and liquid on dynamic AFMs, and the frequency response of

    the systems were obtained [11-13]. Nonlinear dynamics of a

    piezoelectric microcantilever have been studied considering the

    nonlinearity due to curvature, and piezoelectric material [14,

    15]. In other works, linear dynamic models have been

    developed for contact AFM probes, and numerically solved

    [16-20].

    When including the force at the end of the beam during

    mathematical analysis of beam mechanics and vibrations, two

    methods are currently used in research works. One method is to

    consider the force at the end of the beam to be a part of the

    equation of motion using some type of step function such as the

    Heaviside function [14, 15, 19, 21, 22]. The other method is to

    consider the force in the boundary conditions [11, 16, 17, 23,

    24, 25].

    This research work investigates the vibration of dynamic

    tapping mode AFM for these two different methods of analysis.

    For the two different cases, the equations of motion are derived

using Hamilton's Principle, the modal shape functions including the natural frequency are obtained using separation of variables, and the time response functions are obtained using the Galerkin Method. The microcantilever beam is modeled with a

    spring on the free end to approximate the linear contact force

    [26]. The first case considers contact and excitation force at the

tip to be a part of the boundary conditions of the beam. The

    second case assumes that the force is a concentrated force that

    is applied in the equations of motion, and the boundary

    conditions are similar to the free end of a cantilever beam. The

    results of these two cases will be compared. The natural

    frequencies of the two models are also compared to each other

    and to the experimental natural frequency of the Bruker Innova

    AFM with MPP-11123-10 microcantilever.

    Additionally, most research works neglect the effect of tip

    mass completely from the equation of motion and boundary

    conditions [11-25]. In this work, the tip mass is included and

    the results will be compared for including and excluding the tip

    mass to determine the validity of the commonly used approach

    of neglecting the tip mass.

II. METHODS

The governing equations of motion, natural frequencies,

    mode shapes, and time response functions for the dynamics of a

    microcantilever are mathematically derived in this Section for

    the case of 1) a spring at the free end with the contact force at

    the tip considered to be a part of the boundary conditions of the

    beam and 2) a spring at the free end where the contact force is

    considered to be a concentrated force that is applied in the

    equations of motion and the boundary conditions.

    Figure 1 shows a cantilever beam system with a spring

    attached to the free end. The bending displacement of the beam

    in the negative z direction at position x along the beam and time

    t is w(x,t). The coordinate system (x, z) describes the dynamics

    of the microcantilever, and t denotes time.

    Figure 1. Cantilever beam with a spring attached to the free end.

    2.1 Case 1: Force considered in Boundary Conditions

The relevant equation from Hamilton's Principle is

\[ \int_{t_0}^{t_1} \delta\left(T - U + W\right) dt = 0, \tag{1} \]

    where T is kinetic energy, U is potential energy, and W is the

    work done by external loads on the beam. To derive the

    equations of motion, expressions for kinetic energy, potential

    energy, and external work will need to be derived. First, the

    expression for kinetic energy is derived. The kinetic energy will

    be the combined kinetic energy of the beam (Tb) and the tip

    (Ttip). In order to be able to compare the results including and

    excluding the tip mass, the kinetic energy of each must be

    separate.

\[ T = T_b + T_{tip} = \frac{1}{2}\int_0^L m_1\left(\frac{\partial w}{\partial t}\right)^2 dx + \frac{1}{2}\,m_2\,\dot{w}_L^{\,2}, \tag{2} \]

    where m1 is the mass per unit length of the beam, m2 is the tip

    mass, and L is the length of the beam. Also, wL is the

    displacement of the microcantilever at the free end and is a

    function of time.

    The expression for potential energy comes from two

    sources. Ub is the potential energy due to the strain energy of

    the beam, and Us is the potential energy due to the spring.

\[ U_b = \frac{1}{2}\int_0^L EI\left(\frac{\partial^2 w}{\partial x^2}\right)^2 dx, \tag{3} \]

\[ U_s = \int_0^{w_L} k\,w_L\,dw_L = \frac{1}{2}\,k\,w_L^2, \tag{4} \]

where E is the elastic modulus of the beam, I is the area moment of inertia of the beam cross-section, and k is the spring constant.

    The external work is

\[ W = F\sin(\Omega t)\,w_L, \tag{5} \]

where F and Ω are the amplitude and frequency of the excitation force.

    Substituting Equations (2)-(5) into Equation (1) and

    simplifying results in the equation of motion and boundary

    conditions for bending vibrations of the microcantilever of the

    AFM shown in Figure 1 for Case 1:

\[ m_1\ddot{w} + EI\,w^{(iv)} = 0, \tag{6} \]
\[ w(0)=0,\qquad w'(0)=0,\qquad w''(L)=0,\qquad EI\,w'''(L) = k\,w_L + m_2\,\ddot{w}_L - F\sin(\Omega t). \tag{7} \]

For simplification, primes (w′) denote partial derivatives with respect to x, and overdots (ẇ) denote partial derivatives with respect to time. The following substitution is

    helpful for analyzing the full response of the beam:

\[ w(x,t) = v(x,t) + a(x,t), \tag{8} \]
where
\[ a(x,t) = a_0(t) + a_1(t)\,x + a_2(t)\,x^2 + a_3(t)\,x^3. \tag{9} \]

    To solve for a(x,t), the first three boundary conditions from

    Equation (7) are implemented. This results in:

\[ a(x,t) = B(t)\left(x^3 - 3Lx^2\right). \tag{10} \]
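As a quick symbolic check (an illustration, not part of the paper), the polynomial form of Equation (10) can be verified against the first three boundary conditions of Equation (7):

```python
import sympy as sp

x, L, B = sp.symbols('x L B')
a = B * (x**3 - 3*L*x**2)  # the form of Equation (10)

# a(0) = 0, a'(0) = 0, a''(L) = 0 -- the first three boundary conditions
assert a.subs(x, 0) == 0
assert sp.diff(a, x).subs(x, 0) == 0
assert sp.simplify(sp.diff(a, x, 2).subs(x, L)) == 0
```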

    To solve for B(t), the fourth boundary condition from Equation

    (7) is used. This results in the second order ordinary differential

    equation:

\[ \ddot{B}(t) + d^2 B(t) = \frac{F}{2m_2L^3}\sin(\Omega t), \tag{11} \]

    where

\[ d^2 = \frac{6EI + 2kL^3}{2m_2L^3}. \tag{12} \]

Any solution of Equation (11) will work in Equation (10). The

    following solution is chosen:

\[ B(t) = C\sin(\Omega t), \tag{13} \]

    where

\[ C = \frac{F}{2m_2L^3\left(d^2 - \Omega^2\right)}. \tag{14} \]
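One can confirm symbolically (an illustrative check, assuming the forms above) that B(t) = C sin(Ωt), with C as in Equation (14), satisfies Equation (11):

```python
import sympy as sp

t, F, m2, L, d, Om = sp.symbols('t F m_2 L d Omega', positive=True)

C = F / (2*m2*L**3 * (d**2 - Om**2))   # Equation (14)
B = C * sp.sin(Om*t)                   # Equation (13)

# Residual of Equation (11): B'' + d^2 B - F sin(Om t)/(2 m2 L^3)
residual = sp.diff(B, t, 2) + d**2*B - F*sp.sin(Om*t)/(2*m2*L**3)
assert sp.simplify(residual) == 0
```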

    Now Equation (8) becomes

\[ w(x,t) = v(x,t) + C\left(x^3 - 3Lx^2\right)\sin(\Omega t). \tag{15} \]

    Substituting Equation (15) into Equations (6) and (7) yields

\[ \ddot{v} + \frac{EI}{m_1}\,v^{(iv)} = \Omega^2 C\left(x^3 - 3Lx^2\right)\sin(\Omega t), \tag{16} \]
\[ v(0)=0,\qquad v'(0)=0,\qquad v''(L)=0,\qquad EI\,v'''(L) = k\,v_L + m_2\,\ddot{v}_L. \tag{17} \]

    In order to derive the mode shapes and natural frequencies

    of the system, the force term is removed from Equation (16)

    and separation of variables is implemented. The mode shapes

    and natural frequencies can be found by solving the following

    equations:

\[ \phi^{(iv)} - \frac{m_1\omega^2}{EI}\,\phi = 0, \tag{18} \]
\[ \phi(0)=0,\qquad \phi'(0)=0,\qquad \phi''(L)=0,\qquad EI\,\phi'''(L) = \left(k - m_2\omega^2\right)\phi(L). \tag{19} \]

Solving Equations (18) and (19) results in

\[ \phi_n(x) = A_n\left[\sin(\beta_n x) - \sinh(\beta_n x) + \frac{\sin(\beta_n L) + \sinh(\beta_n L)}{\cos(\beta_n L) + \cosh(\beta_n L)}\left(\cosh(\beta_n x) - \cos(\beta_n x)\right)\right], \tag{20} \]

where n = 1, 2, … indicates the mode number, and the βn are the roots of the following frequency equation:

\[ 1 + \cos(\beta_n L)\cosh(\beta_n L) + \frac{\gamma_n}{EI\,\beta_n^3}\left[\sin(\beta_n L)\cosh(\beta_n L) - \cos(\beta_n L)\sinh(\beta_n L)\right] = 0, \tag{21} \]

    where

\[ \gamma_n = k - m_2\,\omega_n^2 = k - \frac{m_2\,EI\,\beta_n^4}{m_1}, \tag{22} \]

\[ \omega_n = \beta_n^2\sqrt{\frac{EI}{m_1}}, \tag{23} \]

where the ωn are the natural frequencies. The value of An can be found using the orthogonality condition.
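The roots βn of Equation (21) must be found numerically. A sketch of that root-finding step (Python/SciPy in place of the authors' MATLAB, with assumed illustrative property values rather than the paper's data):

```python
import numpy as np
from scipy.optimize import brentq

# -- Illustrative property values (assumptions, not the paper's exact data) --
Lb = 125e-6   # beam length (m)
EI = 1e-12    # flexural rigidity EI (N*m^2)
m1 = 1e-6     # mass per unit length (kg/m)
m2 = 1e-12    # tip mass (kg)
k  = 1e-15    # contact-spring constant (N/m)

def freq_eq(beta):
    """Characteristic equation (21), Case 1, with gamma_n from Eq. (22)."""
    g  = k - m2 * EI * beta**4 / m1
    bl = beta * Lb
    return (1.0 + np.cos(bl) * np.cosh(bl)
            + g / (EI * beta**3)
            * (np.sin(bl) * np.cosh(bl) - np.cos(bl) * np.sinh(bl)))

# Bracket sign changes of the characteristic function on a grid, then refine.
grid = np.linspace(0.1 / Lb, 8.0 / Lb, 4000)
vals = [freq_eq(b) for b in grid]
roots = [brentq(freq_eq, a, b) for a, b, fa, fb
         in zip(grid[:-1], grid[1:], vals[:-1], vals[1:]) if fa * fb < 0]

omegas = [b**2 * np.sqrt(EI / m1) for b in roots]   # Eq. (23), rad/s
```

With a small tip mass and spring, the first root stays close to the classic clamped-free value βL ≈ 1.875, which provides a convenient sanity check.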

    Now the Galerkin method can be utilized. The Galerkin

    method uses the following definition:

\[ w(x,t) = \sum_{i=1}^{m}\phi_i(x)\,q_i(t). \tag{24} \]

    After substituting Equation (24) into Equation (16), multiplying

by φj, integrating from zero to L, simplifying, and putting the equation into matrix form, the two-term Galerkin

    solution becomes two coupled ordinary differential equations:

\[ \begin{Bmatrix}\ddot{q}_1\\ \ddot{q}_2\end{Bmatrix} + \begin{bmatrix}k_{11} & k_{12}\\ k_{21} & k_{22}\end{bmatrix} \begin{Bmatrix}q_1\\ q_2\end{Bmatrix} = \begin{Bmatrix}f_1\\ f_2\end{Bmatrix}, \tag{25} \]

    where

\[ k_{11} = \int_0^L EI\,\phi_1^{(iv)}\phi_1\,dx, \tag{26} \]
\[ k_{12} = \int_0^L EI\,\phi_2^{(iv)}\phi_1\,dx, \tag{27} \]
\[ k_{21} = \int_0^L EI\,\phi_1^{(iv)}\phi_2\,dx, \tag{28} \]
\[ k_{22} = \int_0^L EI\,\phi_2^{(iv)}\phi_2\,dx, \tag{29} \]
\[ f_1 = \Omega^2 C\sin(\Omega t)\int_0^L m_1\left(x^3 - 3Lx^2\right)\phi_1\,dx, \tag{30} \]
\[ f_2 = \Omega^2 C\sin(\Omega t)\int_0^L m_1\left(x^3 - 3Lx^2\right)\phi_2\,dx. \tag{31} \]

    The full results for Case 1 are obtained mathematically.

    Section 3 will discuss the computational results.
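The paper integrates the coupled system of Equation (25) with MATLAB's ode23 (Section 3). As an illustrative sketch only, an equivalent two-mode integration in Python/SciPy, with placeholder stiffness and forcing values rather than the paper's computed kij and fi:

```python
import numpy as np
from scipy.integrate import solve_ivp

# Placeholder modal constants (assumptions; Eqs. (26)-(31) would supply real values)
w1 = 2*np.pi*301.25e3        # first natural frequency (rad/s), from Table 2
w2 = 2*np.pi*1.8879e6        # second natural frequency (rad/s), from Table 2
K = np.diag([w1**2, w2**2])  # stand-in stiffness matrix k_ij of Eq. (25)
amp = np.array([1.0, 0.1])   # modal forcing amplitudes (assumed)
Om = 2*np.pi*290e3           # excitation frequency (rad/s), assumed

def rhs(t, y):
    """State y = [q1, q2, dq1, dq2]; Eq. (25): q'' + K q = f sin(Om t)."""
    q, dq = y[:2], y[2:]
    return np.concatenate([dq, amp*np.sin(Om*t) - K @ q])

# RK23 is an adaptive low-order Runge-Kutta pair, comparable to MATLAB's ode23
sol = solve_ivp(rhs, (0.0, 5e-5), np.zeros(4), method='RK23', max_step=2e-8)
```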

    2.2 Case 2: Force considered in Equation of Motion

    The second case to be examined, as stated previously, is a

    system including a spring at the free end where the contact

    force is considered to be a concentrated force that is applied in

    the equations of motion and the boundary conditions are similar

to those of a free-end cantilever. Hamilton's principle is, again, utilized to derive the equation of motion and boundary

    conditions for this case. The equation for the kinetic energy of

    the beam, Equation (2), and the equation for the potential

    energy of the beam, Equation (3), are the same as in the first

    case. Equation (4), the potential energy of the spring, will not

    be included in this case. Instead, the work done by the spring

    force is included in the external work term resulting in

\[ W_{ext} = \int_0^L f_N\,H(x-L)\,w\,dx, \tag{32} \]

    where

\[ f_N = F\sin(\Omega t) - k\,w_L, \tag{33} \]

and H(·) is the Heaviside function, defined as

\[ H(\sigma) = \begin{cases}1, & \sigma \geq 0\\ 0, & \sigma < 0\end{cases} \tag{34} \]

    Substituting Equations (2), (3), and (32) into Equation (1) and

    simplifying results in the equations of motion and boundary

    conditions for bending vibrations of the microcantilever of the

    AFM shown in Figure 1 for Case 2:

\[ m_1\ddot{w} + EI\,w^{(iv)} = f_N\,H(x-L), \tag{35} \]
\[ w(0)=0,\qquad w'(0)=0,\qquad w''(L)=0,\qquad EI\,w'''(L) = m_2\,\ddot{w}_L. \tag{36} \]

    In order to derive the mode shapes of the system, the force term

    is removed from Equation (35). The derivations will be similar

    to Case 1. Equation (35) becomes the same as Equation (16).

    Solving the differential equation and applying boundary

    conditions then results in the solution for our mode shapes.

    Following the process used in Section 2.1, the resulting mode

    shape equation can be solved and is the same as Equations (20),

    (21), and (23). The only difference is Equation (22) defining , which becomes

\[ \gamma_n = -\frac{m_2\,EI\,\beta_n^4}{m_1}. \tag{37} \]

    With the mode shapes and natural frequencies determined, the

    full response of the system can be found, again, using the

    Galerkin method. Following the same procedure as in Section

    2.1, the two term Galerkin solution becomes two coupled

    ordinary differential equations of the same form as Equation

(25), where

\[ k_{11} = \int_0^L EI\,\phi_1^{(iv)}\phi_1\,dx + k\,\phi_1^2(L), \tag{38} \]
\[ k_{12} = \int_0^L EI\,\phi_2^{(iv)}\phi_1\,dx + k\,\phi_1(L)\,\phi_2(L), \tag{39} \]
\[ k_{21} = \int_0^L EI\,\phi_1^{(iv)}\phi_2\,dx + k\,\phi_1(L)\,\phi_2(L), \tag{40} \]
\[ k_{22} = \int_0^L EI\,\phi_2^{(iv)}\phi_2\,dx + k\,\phi_2^2(L), \tag{41} \]
\[ f_1 = F\sin(\Omega t)\int_0^L \phi_1\,dx, \tag{42} \]
\[ f_2 = F\sin(\Omega t)\int_0^L \phi_2\,dx. \tag{43} \]

    The full results for both cases are obtained mathematically.

    Section 3 will discuss and compare the computational results.
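As an illustrative sketch (assumed property values, not the paper's data), the Case 2 stiffness terms of Equations (38)-(41) can be assembled numerically, using the fact that the mode shapes of Equation (20) satisfy φn⁽ⁱᵛ⁾ = βn⁴ φn:

```python
import numpy as np
from scipy.integrate import quad

# Assumed (illustrative) properties -- not the paper's exact data
Lb, EI, k = 125e-6, 1e-12, 1e-15
betas = [1.8751/Lb, 4.6941/Lb]   # approximate first two roots of Eq. (21)

def phi(x, b):
    """Mode shape of Equation (20) with A_n = 1."""
    bl = b*Lb
    s = (np.sin(bl) + np.sinh(bl)) / (np.cos(bl) + np.cosh(bl))
    return np.sin(b*x) - np.sinh(b*x) + s*(np.cosh(b*x) - np.cos(b*x))

K = np.empty((2, 2))
for i in range(2):
    for j in range(2):
        # phi_j^(iv) = beta_j^4 * phi_j, so Eqs. (38)-(41) reduce to
        # weighted overlap integrals plus the tip-spring boundary term
        overlap, _ = quad(lambda x: phi(x, betas[j])*phi(x, betas[i]), 0, Lb)
        K[i, j] = EI*betas[j]**4*overlap + k*phi(Lb, betas[i])*phi(Lb, betas[j])
```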

III. RESULTS

The complete mechanics of a microcantilever beam have

    been obtained mathematically for two possible cases most

    commonly used in research works. The first case considers

    contact force at the tip to be a part of the boundary conditions

    of the beam. The second case assumes that the force is a

    concentrated force that is applied in the equations of motion,

    and the boundary conditions are similar to the free end of a

    cantilever beam. First, the equations of motion were found

using Hamilton's Principle. Next, the natural frequencies and mode shapes were found. Finally, the time response was

    calculated using the Galerkin Method.

In this section, numerical results are presented. See Table 1

    for the values of properties used in the numerical analysis.

Table 1. Microcantilever Properties

    Constant            Value
    L (µm)              125 ± 5
    E (GPa)             160 ± 5
    I (µm⁴)             154 ± 5
    m1 (mg/m)           34.82
    m2 (pg)             2.19 ± 0.5
    k (×10⁻¹⁵ N/m)      75.43
    F (N)               1 ± 0.1

    Beam values are found on the Bruker AFM Probe web site

    [27]. The spring constant, k, is actually somewhere in the range

    of -15 to 40 N/m [28]. However, this value of the spring, which

    represents the contact force only, acts over a very small range

    of motion when the tip is in contact with the sample. Therefore,

    a number with a similar magnitude as the effect of the tip mass

    was selected to represent the small nature of the spring force.

    Since the excitation force, F, can only be found experimentally,

a reasonable value of 1 N is chosen.

First, the results of the two methods for calculating the

    natural frequencies are compared. The natural frequencies are

    found by numerically solving Equations (21), (22), and (23) for

Case 1 and Equations (21), (23), and (37) for Case 2. Table 2 shows that the two methods produce very similar results.

Table 2. Natural Frequencies of the Microcantilever Beam for Cases 1 and 2

    Natural Frequency    Case 1     Case 2     Percent Difference
    ω1 (kHz)             301.25     301.26     0.003
    ω2 (kHz)             1,887.9    1,888.0    0.005

    The nominal value of natural frequency was found on the

    Bruker AFM Probe web site [27]. The nominal frequency is

    300 kHz with a possible range of 200 to 400 kHz. Also, the

    experimental value of natural frequency was found using the

    Bruker Innova software and Bruker Innova AFM with MPP-

    11123-10 microcantilever. The experimental first natural

    frequency was found to be 360.3 kHz. The experimental value

is within Bruker's range. Since the other properties are set to the nominal values from Bruker's website, it is most reasonable to compare the numerical natural frequency to the nominal

    natural frequency from the website. This results in an error of

    0.42%.
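The quoted percentages can be reproduced directly from the tabulated values (a simple check, using only the numbers quoted above):

```python
# Values quoted in Table 2 and in the text
f1_case1, f1_case2 = 301.25, 301.26    # first natural frequency (kHz)
f_nominal = 300.0                      # Bruker nominal value (kHz)

pct_diff = abs(f1_case2 - f1_case1) / f1_case1 * 100
err_vs_nominal = abs(f1_case1 - f_nominal) / f_nominal * 100

print(round(pct_diff, 3))        # 0.003, matching Table 2
print(round(err_vs_nominal, 2))  # 0.42, matching the text
```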

    Next, the mode shapes for the two different cases are

    compared. Both Cases use Equation (20) to find the mode

    shapes. Figures (2) and (3) are the first two modes for Case 1

    and Case 2, respectively.

Figure 2. The first and second mode shapes, φ1 and φ2, for Case 1.


Figure 3. The first and second mode shapes, φ1 and φ2, for Case 2.

The maximum difference between the mode shapes of the two cases is essentially zero.

    Next, the time dependent functions, q1 and q2, are

compared. The MATLAB solver ode23 was used to numerically

solve Equations (25) through (31) for Case 1 and Equations (25)

    and (38) through (43) for Case 2. See Figures 4 and 5 (Case 1)

    and Figures 6 and 7 (Case 2) for graphical representations of q1

    and q2.

    Figure 4. The first and second time response functions, q1

    and q2, for Case 1.

    Figure 5. A zoomed in view of the first and second time

    response functions, q1 and q2, for Case 1.

    Figure 6. The first and second time response functions, q1

    and q2, for Case 2.

    Figure 7. A zoomed in view of the first and second time response functions, q1 and q2, for Case 2.

    As can be seen in Figures 4 through 7, the results for the

    two different methods are similar. The main difference is the

    amplitude of the first time function, q1. Comparison of the

    amplitudes after the initial settling time reveals that the

    magnitude of Case 1 is approximately 20.3% larger than Case

    2. The second time function, q2, is much smaller than q1 in both

    cases and will not contribute as much to the final response.

    Finally, the complete response can be found using

    Equations (15) and (24) for Case 1 and Equation (24) for Case

    2. The results are shown in Figures 8 and 9 for Cases 1 and 2,

    respectively.

    Figures 8 and 9 show the complete microcantilever beam

    mechanics at the free end of the beam as derived in Section 2.

    The two graphs are very similar with the most noticeable

    difference, again, being the amplitude. The amplitude of the

    complete response, w(L,t), after the settling period is

    approximately 19.6% larger in Case 1.

    All of the steps detailed so far in Section 3 are repeated for

    Case 1 and 2 without tip mass. The complete response for the

    tip of the beam, w(L,t), is shown in Figures 10 and 11 without

    tip mass included.


Figure 8. The complete response at the free end of the

    beam, w(L,t), for Case 1.

    Figure 9. The complete response at the free end of the

    beam, w(L,t), for Case 2.

    Figure 10. The complete response at the free end of the

    beam, w(L,t), for Case 1 with tip mass not included.

    A comparison of Figure 8 to Figure 10 shows that there is

    approximately a 20.1% increase in amplitude after the settling

    period when tip mass is included in Case 1. Therefore, for this

    specific beam under these conditions, tip mass is not negligible

    for Case 1. A comparison of Figure 9 to Figure 11 shows that

    there is approximately a 0.3% increase in amplitude when tip

    mass is included in Case 2. Therefore, for Case 2, tip mass is

    negligible.

    Figure 11. The complete response at the free end of the

    beam, w(L,t), for Case 2 with tip mass not included.

CONCLUSIONS

Two common ways of handling the external force applied

    to the microcantilever of an AFM are examined. The first case

    includes the external force in the boundary conditions. The

    second case includes it in the equation of motion with boundary

    conditions like that of a free end. In Section 2, the equations of

motion are derived using Hamilton's Principle, the natural frequencies and mode shapes are determined, and the time

    response is found using the Galerkin Method.

    Section 3 details the numerical analysis. The results show

    very little to no difference when determining the natural

    frequencies using the two different methods with a maximum

difference of 0.005% in the first two natural frequencies and

    practically no difference in the mode shapes.

    However, the time response plots show more difference

    between the two different methods, with the amplitude having a

    difference of approximately 20.3% between the two methods.

    Similarly, the complete response of the microcantilever beam at

    the free end has a difference in amplitudes of approximately

    19.6%.

    Finally, the complete response is found again with tip mass

    set to zero. Comparing the results shows that tip mass creates a

    20.1% increase in amplitude for Case 1 and a 0.3% increase for

    Case 2.

    These results indicate that if a research work is only

    interested in finding the natural frequencies and mode shapes,

    using either method is equally reliable. Since the second

    method (including the force in the equation of motion) is a

    considerably easier method for deriving these natural

    frequencies and mode shapes, it is reasonable to use the second

    method. However, the time response presents more differences

    between the two methods and requires further examination and

    physical experimentation to validate the results and to

    determine which method is more reliable. As for the tip mass,

    the complete response was affected to a greater extent in Case 1

    than in Case 2. These results indicate that tip mass may not

    always be negligible.


ACKNOWLEDGMENT

This material is based upon work supported by the

    National Science Foundation Graduate Research Fellowship

    under Grant No. 23478. Also, portions of this work were

    funded by the National Science Foundation, GK-12 Grant No.

0742504.

    REFERENCES

[1] Salehi-Khojin, A., Bashash, S., Jalili, N., Thompson, G.L., and Vertegel, A., 2009, Modeling Piezoresponse Force Microscopy for Low-Dimensional Material Characterization: Theory and Experiment, Journal of Dynamic Systems, Measurement, and Control, 131(6), 061107 (1-8).

    [2] Lekka, M., and Wiltowska-Zuber, J., 2009, Biomedical Applications of AFM, Journal of Physics: Conference Series, 146, 012023.

    [3] Mahmoodi, S.N., and Jalili, N., 2008, Coupled Flexural-Torsional Nonlinear Vibrations of

    Piezoelectrically-Actuated Microcantilevers, ASME Journal of Vibration and Acoustics, 130(6), 061003.

[4] Hölscher, H., Schwarz, U.D., and Wiesendanger, R., 1997, Modeling of the Scan Process in Lateral Force Microscopy, Surface Science, 375, 395-402.

    [5] Shibata, T., Unno, K., Makino, E., Ito, Y., and Shimada, S., 2002, Characterization of Sputtered ZnO Thin Film as Sensor and Actuator for Diamond AFM Probe, Sensors and Actuators A, 102, 106-113.

[6] Itoh, T., and Suga, T., 1993, Development of a Force Sensor for Atomic Force Microscopy Using Piezoelectric Thin Films, Nanotechnology, 4, 218-224.
[7] Yamada, H., Itoh, H., Watanabe, S., Kobayashi, K., and Matsushige, K., Scanning Near-field Optical Microscopy Using Piezoelectric Cantilevers, Surface and Interface Analysis, 27, 503-506.

    [8] Rollier, A-S., Jenkins, D., Dogheche, E., Legrand, B., Faucher, M., and Buchaillot, L., 2010, Development of a New Generation of Active AFM Tools for Applications

    in Liquid, Journal of Micromechanics and Microengineering, 20, 085010.

    [9] Rogers, B., Manning, L., Jones, M., Sulchek, T., Murray, K., Beneschott, B., and Adams, J.D., 2003, Mercury Vapor Detection with A Self-Sensing, Resonating

    Piezoelectric Cantilever, Review of Scientific Instruments, 74(11), 4899-4901.

    [10] Seo, Y. and Jhe, W., 2008, Atomic Force Microscopy and Spectroscopy, Reports on Progress in Physics, 71, 016101.

    [11] Delnavaz, A. Mahmoodi, S.N., Jalili, N., Ahadian, M.M. and Zohoor, H., 2009, Nonlinear Vibrations of Microcantilevers Subjected to Tip-Sample Interactions:

    Theory and Experiment, Journal of Applied Physics, 106, 113510.

    [12] Chu, J., Maeda, R., Itoh, T., and Suga, T., 1999, Tip-Sample Dynamic Force Microscopy Using Piezoelectric

Cantilever for Full Wafer Inspection, Japanese Journal of Applied Physics, 38, 7155-7158.

    [13] Asakawa, H., and Fukuma, T., 2009, Spurious-Free Cantilever Excitation in Liquid by Piezoactuator with

    Flexure Drive Mechanism, Review of Scientific Instruments, 80, 103703.

    [14] Mahmoodi, S.N., Daqaq, M., and Jalili, N., 2009, On the Nonlinear-Flexural Response of Piezoelectrically-

Driven Microcantilever Sensors, Sensors and Actuators A, 153(2), 171-179.

    [15] Mahmoodi, S.N., Jalili, N. and Daqaq, M., 2008, Modeling, Nonlinear Dynamics and Identification of a Piezoelectrically-Actuated Microcantilever Sensor, ASME/IEEE Transaction on Mechatronics, 13(1), 59-65.

    [16] Kuo, C., Huy, V., Chiu, C., and Chiu, S., 2012, Dynamic Modeling and Control of an Atomic Force Microscope Probe Measurement System, Journal of Vibration and Control, 18(1), 101-116.

    [17] Ha, J., Fung, R., and Chen, Y., 2008, Dynamic Responses of An Atomic Force Microscope Interacting

    with Sample, Journal of Dynamic Systems, Measurement, and Control, 127, 705-709.

[18] El Hami, K., and Gauthier-Manuel, B., 1998, Selective Excitation of the Vibration Modes of A Cantilever Spring, Sensors and Actuators A, 64, 151-155.
[19] Chang, W. and Chu, S., 2003, Analytical Solution of Flexural Vibration Responses on Taped Atomic Force Microscope Cantilevers, Physics Letters A, 309, 133-137.

    [20] Rabe, U., Janser, K., and Arnold, W., 1996, Vibrations of Free and Surface-coupled Atomic Force Microscope

    Cantilevers: Theory and Experiment, Rev. Sci. Instrum., 67, 3281.

[21] Jovanovic, V., 2010, A Fourier Series Solution for the Transverse Vibration Response of a Beam with a Viscous Boundary, Journal of Sound and Vibration, doi:10.1016/j.jsv.2010.10.007.

    [22] Hilal, M. A. and Zibdeh, H. S., 2000, Vibration Analysis of Beams with General Boundary Conditions

    Traversed by a Moving Force, Journal of Sound and Vibration, 229(2), 377-388.

[23] Bahrami, A. and Nayfeh, A. H., 2012, On the Dynamics of Tapping Mode Atomic Force Microscope Probes, Nonlinear Dyn, 70, 1605-1617.

    [24] Abdel-Rahman, E. M. and Nayfeh, A. H., 2005, Contact Force Identification Using the Subharmonic Resonance

    of a Contact-Mode Atomic Force Microscopy, Nanotechnology, 16, 199-207.

    [25] Bahrami, A. and Nayfeh, A. H., 2013, Nonlinear Dynamics of Tapping Mode Atomic Force Microscopy

    in the Bistable Phase, Commun Nonlinear Sci Numer Simulat, 18, 799-810.

[26] Abdel-Rahman, E. M. and Nayfeh, A. H., 2005, Contact Force Identification Using the Subharmonic Resonance of a Contact-Mode Atomic Force Microscopy, Nanotechnology, 16, 199-207.

  • [27] Bruker AFM Probes. Web. http://www.brukerafmprobes.com/

[28] Hölscher, H., 2006, Quantitative Measurement of Tip-Sample Interactions in Amplitude Modulation Atomic

    Force Microscopy, Applied Physics Letters, 89, 123109.

ASME District F - Early Career Technical Conference Proceedings, ASME District F ECTC 2013

November 2-3, 2013 - Birmingham, Alabama USA

    ROBOT AUTONOMOUS NAVIGATION CONTROL ALGORITHMS FOR VARIOUS TYPES OF TASKS PERFORMED IN THE X-Y PLANE

    Stephen Armah, Sun Yi

    North Carolina A&T State University Greensboro, NC, USA

ABSTRACT

Mobile robot control has attracted considerable attention of

    researchers in the areas of robotics and autonomous systems

    during the last decade. One of the goals in the field of mobile

    robotics is the development of mobile platforms that robustly

    operate in populated environments and offer various services to

    humans. Autonomous mobile robots need to be equipped with

    appropriate control systems to achieve the goal. Such control

    systems are supposed to have navigation control algorithms that

    will make mobile robots successfully move to a point, move to a pose, follow a path, follow a wall and avoid obstacles (stationary or moving). Also, robust visual tracking algorithms to detect objects and obstacles in real-time have to be integrated

with the navigation control algorithms.

The research uses a Simulink model of the kinematics of a

    unicycle, kinematically equivalent to a differential drive

    wheeled mobile robot, developed by modifying the bicycle

    model in [3]. Control algorithms that enable these robot

    operations are integrated into the Simulink model. Also, an

    effective navigation architecture that combines the go-to-goal, avoid-obstacle, and follow-wall controllers into a full navigation system is presented. A MATLAB robot simulator is

    used to implement this navigation control algorithm. The robot

    in the simulator moves to a goal in the presence of convex and

    non-convex obstacles. Finally, experiments are carried out

    using a ground robot, Dr Robot X80SV, in a typical office

    environment to verify successful implementation of the

    navigation architecture algorithm by modifying a program

    available in [9].

    Keywords: Wheeled mobile robots, PID-feedback control,

    Navigation control algorithm, Differential drive, Hybrid

    automata

    1. INTRODUCTION

    A mobile robot is an automatic machine that is capable of

    movement in any given environment. Wheeled mobile robots

    (WMRs) are increasingly present in industrial and service

    robotics, particularly when flexible motion capabilities are required on reasonably smooth grounds and surfaces. Several

    mobility configurations (wheel number and type, their location and actuation, and single- or multi-body vehicle structure) can

    be found in different applications [4]. The most common for

    single-body robots are differential drive and synchro drive (both kinematically equivalent to a unicycle), tricycle or car-

    like drive, and omnidirectional steering [4].

    The presented research uses control algorithms that make

    mobile robots move to a point, move to a pose, follow a line, follow a circle and avoid obstacles, taken from the literature. The main focus of the research is the navigation control algorithm that has been developed to enable

    a differential drive wheeled mobile robot (DDWMR) to

    accomplish its assigned task of moving to a goal free from any

    risk of collision with obstacles. In order to develop this

    navigation system, a low-level planning is used, based on a

    simple model whose input can be calculated using a PID

    controller or transformed into actual robot input.

    Simulation results using Simulink models and MATLAB

    and a robot simulator that implements these algorithms are

    presented. The robot simulator is able to move to a goal in the

    presence of convex and non-convex obstacles. Also, several

    experiments are performed using a ground robot, Dr Robot

    X80SV, in a typical office environment, to verify successful

    implementation of the navigation architecture algorithm using a

    modified program developed in C# by Dr Robot Inc [9].

    Possible applications of WMRs include security robots, land

    mine detectors, planetary exploration missions, the Google

    autonomous car, autonomous vacuum cleaners, and lawn

    mowers [6].

    1.1 Organization of contents

    In Section 2 the methodology used for the research is

    presented. Section 2.1 introduces the kinematic model of a

    DDWMR and the control algorithms applied. For the purposes

    of implementation, the kinematic model of a unicycle is also

    introduced in this section. In Section 2.2 controllers built for

    the different WMR behaviors are presented, such as move to a point, move to a pose, follow a line, follow a circle, and avoid obstacles. Also in Section 2.2 a navigation control algorithm that combines the go-to-goal, follow-wall and avoid-obstacle behaviors is presented. In Section 3 the simulations and experiments performed are explained. In Section 4 the

    simulations and experimental results are summarized.

    Concluding remarks and future work are presented in Section 5.

  • 2. KINEMATIC MODELING AND CONTROL ALGORITHMS

    2.1 Kinematic Modeling of DDWMR

    The DDWMR setup used for the presented study is shown in Figure 1 (top view). The configuration of the vehicle is represented by the generalized coordinates q = (x, y, θ), where (x, y) is the global position and θ is the heading. The vehicle's velocity, v, is by definition in the vehicle's x-direction, L is the distance between the wheels, R is the radius of the wheels, ω_R is the right wheel angular velocity, ω_L is the left wheel angular velocity and ω is the heading rate. Let {W} and {V} be the world and vehicle frames respectively.

    Figure 1. DDWMR Setup

    The kinematic model of the DDWMR, based on the coordinate q = (x, y, θ), is given by

    ẋ = (R/2)(ω_R + ω_L) cos θ
    ẏ = (R/2)(ω_R + ω_L) sin θ
    θ̇ = (R/L)(ω_R − ω_L)

    For the purpose of implementation, the kinematic model of a unicycle is used; it corresponds to a single upright wheel rolling on a plane, with the equation of motion given as

    ẋ = v cos θ
    ẏ = v sin θ
    θ̇ = ω

    The inputs in these two models are (ω_R, ω_L) and (v, ω) respectively. These inputs are related as

    v = R(ω_R + ω_L)/2,  ω = R(ω_R − ω_L)/L

    A Simulink model shown in Figure 2 has been developed;

    it implements the kinematic model of the unicycle by

    modifying the bicycle model in [3].
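    Outside Simulink, the same unicycle kinematics can be sketched as a simple Euler integration; this is an illustrative sketch, not the paper's Simulink model. The wheel radius and wheelbase below are taken from the X80SV specifications given later in the paper; the inputs and step size are hypothetical:

    ```python
    import math

    def unicycle_step(x, y, theta, v, w, dt):
        """One Euler step of the unicycle model:
        x' = v cos(theta), y' = v sin(theta), theta' = w."""
        return (x + v * math.cos(theta) * dt,
                y + v * math.sin(theta) * dt,
                theta + w * dt)

    def wheel_speeds_to_unicycle(wR, wL, R, L):
        """Map differential-drive wheel rates to (v, w)."""
        return (R * (wR + wL) / 2.0, R * (wR - wL) / L)

    # Equal wheel speeds give w = 0, so the robot drives straight.
    v, w = wheel_speeds_to_unicycle(2.0, 2.0, 0.085, 0.26)
    x, y, theta = unicycle_step(0.0, 0.0, 0.0, v, w, 0.1)
    ```
    
    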

    Figure 2. Simulink Model for the Unicycle

    2.2 Control Algorithms: Building Behaviors

    Control of the unicycle model inputs is about selecting the appropriate input and applying the traditional PID-feedback controller, given by

    u(t) = K_P e(t) + K_I ∫ e(t) dt + K_D de(t)/dt

    where e, defined for each task below, is the error between the desired value and the output value, K_P is the proportional gain, K_I is the integrator gain, K_D is the derivative gain and t is time. The control gains used in this research are obtained by tweaking the various values to obtain satisfactory responses. If the vehicle is driven at a constant velocity, then the control input will only vary with the angular velocity, ω; thus

    ω(t) = K_P e(t) + K_I ∫ e(t) dt + K_D de(t)/dt
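    In discrete time this PID law can be sketched as follows; a minimal illustrative implementation with hypothetical gains (the paper's gains were hand-tuned and are not reproduced here):

    ```python
    class PID:
        """Discrete PID controller: u = Kp*e + Ki*sum(e)*dt + Kd*de/dt."""
        def __init__(self, kp, ki, kd, dt):
            self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
            self.integral = 0.0
            self.prev_error = 0.0

        def update(self, error):
            self.integral += error * self.dt          # accumulate integral term
            derivative = (error - self.prev_error) / self.dt
            self.prev_error = error
            return (self.kp * error + self.ki * self.integral
                    + self.kd * derivative)

    pid = PID(kp=2.0, ki=0.1, kd=0.05, dt=0.1)
    w = pid.update(0.5)   # heading error of 0.5 rad -> angular rate command
    ```
    
    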

    2.2.1 Developing Individual Controllers

    This section presents control algorithms that make mobile robots move to a point, move to a pose, follow a line, follow a circle and avoid obstacles.

    2.2.1.1 Moving to a Point

    Consider a robot moving toward a goal point, (x_g, y_g), from a current position, (x, y), in the xy-plane, as depicted in Figure 3 below.

    Figure 3. Setup for Moving to a Point

    The desired heading (robot's relative angle), θ*, is determined as

    θ* = atan2(y_g − y, x_g − x)

    and the heading error is θ* − θ. Due to the problem of angles wrapping at ±π, a corrected error

    e = atan2(sin(θ* − θ), cos(θ* − θ))

    is used instead of the raw difference. Thus the heading θ can be controlled using ω = K_h e. If the robot's velocity is to be controlled, a proportional controller gain, K_v, is applied to the distance from the goal, shown below [2]:

    v = K_v √((x_g − x)² + (y_g − y)²)
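    The move-to-point behavior can be sketched as a short function; the gains are hypothetical, and the atan2-based wrapping implements the corrected angle error described above:

    ```python
    import math

    def move_to_point(x, y, theta, xg, yg, Kv=0.5, Kh=4.0):
        """Proportional go-to-point controller: speed from distance to
        the goal, angular rate from the wrapped heading error."""
        theta_star = math.atan2(yg - y, xg - x)          # desired heading
        e = math.atan2(math.sin(theta_star - theta),     # wrap to (-pi, pi]
                       math.cos(theta_star - theta))
        v = Kv * math.hypot(xg - x, yg - y)              # slows near the goal
        w = Kh * e
        return v, w

    v, w = move_to_point(0.0, 0.0, 0.0, 1.0, 1.0)
    ```
    
    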

    2.2.1.2 Moving to a Pose

    The above controller could drive the robot to a goal position, but the final orientation depends on the starting position. In order to control the final orientation, the kinematic model is rewritten in matrix form as

    [ẋ, ẏ, θ̇]ᵀ = [cos θ, 0; sin θ, 0; 0, 1][v, ω]ᵀ


  • The model is then transformed into polar coordinate form using the notation shown in Figure 4.

    Figure 4. Setup for Moving to a Pose

    Applying a change of variables

    ρ = √(Δx² + Δy²)
    α = atan2(Δy, Δx) − θ
    β = −θ − α

    we have the system in polar form, which results in

    ρ̇ = −v cos α
    α̇ = (v sin α)/ρ − ω
    β̇ = −(v sin α)/ρ

    and assumes the goal is in front of the vehicle, that is α ∈ (−π/2, π/2]. The linear control law

    v = k_ρ ρ
    ω = k_α α + k_β β

    drives the robot to a unique equilibrium at (ρ, α, β) = (0, 0, 0). The intuition behind this controller is that the terms k_ρ ρ and k_α α drive the robot along a line toward the goal, while the term k_β β rotates the line so that β → 0. The closed-loop system is stable so long as k_ρ > 0, k_β < 0 and k_α − k_ρ > 0. For the case where the goal is behind the robot, that is α ∉ (−π/2, π/2], the robot is reversed by negating v and α in the control law. The velocity v then always has a constant sign which depends on the initial value of α.

    2.2.1.3 Obstacle Avoidance

    In a real environment robots must avoid obstacles in order

    to go to a goal. Depending on the positions of the goal and the

    obstacle(s) relative to the robot, the robot must move to the

    goal using either a pure go-to-goal behavior or a blend of the avoid-obstacle and go-to-goal behaviors. In pure obstacle avoidance the robot drives away from the obstacle and

    moves in the opposite direction. The possible values of the desired heading θ* that can be used in the control law discussed in 2.2.1.1 are shown in

    Figure 5 below, where θ_o is the obstacle heading.

    Figure 5. Setup for Avoiding Obstacle

    2.2.1.4 Following a Line

    Another useful task for a mobile robot is to follow a line on a plane defined by ax + by + c = 0. This requires two controllers to adjust the heading. One controller steers the robot to minimize the robot's normal distance from the line, given by

    d = (ax + by + c) / √(a² + b²)

    The proportional controller

    α_d = −K_d d,  K_d > 0

    turns the robot toward the line. The second controller adjusts the heading angle to be parallel to the line,

    θ* = atan2(−a, b)

    using the proportional controller

    α_h = K_h(θ* − θ),  K_h > 0

    The combined control law

    ω = −K_d d + K_h(θ* − θ)

    turns the wheel so as to drive the robot toward the line and to move along it.
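    The combined line-following law can be sketched as below, for a line with coefficients (a, b, c); the gains are hypothetical:

    ```python
    import math

    def follow_line(x, y, theta, a, b, c, Kd=0.5, Kh=1.0):
        """Combined line-following steering: w = -Kd*d + Kh*(theta* - theta),
        where d is the signed normal distance to ax + by + c = 0 and
        theta* = atan2(-a, b) is the direction of the line."""
        d = (a * x + b * y + c) / math.hypot(a, b)   # signed distance
        theta_star = math.atan2(-a, b)               # line direction
        e = math.atan2(math.sin(theta_star - theta), # wrapped heading error
                       math.cos(theta_star - theta))
        return -Kd * d + Kh * e

    # Robot sitting on the line y = x (-x + y = 0), already heading along it:
    w = follow_line(1.0, 1.0, math.pi / 4, -1.0, 1.0, 0.0)
    ```

    With zero distance error and zero heading error the commanded turn rate is zero, as expected.
    
    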

    2.2.1.5 Following a Circle

    Instead of a straight line, the robot can follow a defined path on the xy-plane, and in this section the robot follows a circle. This problem is very similar to the control problem presented in 2.2.1.1, except that this time the point is moving. The robot maintains a distance d behind the pursuit point, and an error, e, can be formulated as [2]

    e = √((x_g − x)² + (y_g − y)²) − d

    that will be regulated to zero by controlling the robot's velocity using a PI controller

    v = K_v e + K_i ∫ e dt

  • The integral term is required to provide a finite velocity

    demand when the following error is zero. The second controller steers the robot toward the target, which is at the

    relative angle θ* = atan2(y_g − y, x_g − x), with a controller given by ω = K_h(θ* − θ).

    2.2.2 Developing Navigation Control Algorithm

    This section introduces how the navigation architecture,

    which consists of the go-to-goal, follow-wall and avoid-obstacle

    behaviors, was developed. In order to develop the navigation

    system, low-level planning was used, starting with a simple

    model whose input can be calculated by using a PID controller

    or transformed into actual robot input, as depicted in Figure 6. For this simple planning a desired motion vector is picked and set equal to the input, u, of the point mass model shown in Figure 7.

    Figure 6. Planning Model Input to Actual Robot Input

    This selected system is controllable, as compared to the

    unicycle system, which is non-linear and not controllable even

    after it has been linearized. This layered architecture makes the

    DDWMR act like the point mass model shown in Figure 7.

    2.2.2.1 Go-to-goal (GTG) Mode

    Consider the point mass moving toward a goal point, x_g, with current position x in the xy-plane. The error, e = x_g − x, is controlled by the input u = Ke, where K is a gain matrix. Since the system is ẋ = u, it is asymptotically stable if K > 0. An appropriate K is selected to obey the function shown in Figure 8, such that

    K = v_0 (1 − e^(−a‖e‖²)) / ‖e‖

    where v_0 and a are constants to be selected; in this way the robot will not go faster further away from the goal.
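    One common shaping of the gain consistent with Figure 8 can be sketched as follows; the formula and the constants v_0 and a are illustrative assumptions, not values from the paper:

    ```python
    import math

    def gtg_input(x, goal, v0=1.0, a=2.0):
        """Go-to-goal input u = K*e with the speed-limited gain
        K = v0*(1 - exp(-a*||e||^2))/||e||, so ||u|| approaches v0 far
        from the goal and u -> 0 smoothly at the goal."""
        e = [g - xi for g, xi in zip(goal, x)]
        dist = math.hypot(*e)
        if dist < 1e-9:                      # already at the goal
            return [0.0, 0.0]
        K = v0 * (1.0 - math.exp(-a * dist ** 2)) / dist
        return [K * ei for ei in e]

    u = gtg_input([0.0, 0.0], [3.0, 4.0])   # far away: ||u|| is close to v0
    ```
    
    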

    2.2.2.2 Obstacle Avoidance (AO) Mode

    Let the obstacle position be x_o; then the error e = x − x_o is controlled by the input u = Ke and, since the system is ẋ = u, it is desirably unstable if K > 0. An appropriate K is selected to obey the function shown in Figure 9, where the shaping constants are to be selected.

    2.2.2.3 Blending AO and GTG Modes

    In a pure GTG mode, u = u_GTG, or a pure AO mode, u = u_AO (what are termed hard switches), performance can be guaranteed, but the ride can be bumpy and the robot can encounter the Zeno phenomenon. A control algorithm for blending the AO and GTG modes is given by

    u = σ u_AO + (1 − σ) u_GTG

    This algorithm ensures a smooth ride but does not guarantee performance. Here σ ∈ [0, 1] is a blending function of the distance to the obstacle/boundary relative to a constant safe distance Δ, given appropriately as an exponential function with a shaping constant to be selected.

    2.2.2.4 Follow-wall (FW) Mode

    As pointed out in Section 2.2.2.2, in a pure obstacle

    avoidance mode the robot drives away from the obstacle and

    moves in the opposite direction, but this is overly cautious in a

    real environment where the task is to go to a goal. The robot

    should be able to avoid obstacles by going around their boundaries,

    and this situation leads to what is termed the follow-wall, or

    induced or sliding, mode, u_FW, between the GTG and AO modes; this is needed for the robot to negotiate complex

    environments. The FW mode maintains a distance Δ to the obstacle/boundary as if it is following it, and the robot can clearly move in two different

    directions, clockwise (c) and counter-clockwise (cc), along the

    boundary (Figure 10). This is achieved by rotating u_AO by π/2 and −π/2 to obtain u_FW^c and u_FW^cc respectively, and then

    scaling by a constant α to obtain a suitable induced mode.
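    The construction of the two candidate follow-wall directions, and the dot-product test used to pick between them, can be sketched as follows; the scaling constant alpha is hypothetical:

    ```python
    import math

    def rotate(u, phi):
        """Rotate a 2-D vector u by angle phi."""
        c, s = math.cos(phi), math.sin(phi)
        return [c * u[0] - s * u[1], s * u[0] + c * u[1]]

    def follow_wall_candidates(u_ao, alpha=1.0):
        """Rotate u_AO by +/- pi/2 to get the two wall-following
        directions, scaled by alpha."""
        u_c = [alpha * v for v in rotate(u_ao, math.pi / 2)]
        u_cc = [alpha * v for v in rotate(u_ao, -math.pi / 2)]
        return u_c, u_cc

    def pick_direction(u_gtg, u_c, u_cc):
        """Choose the candidate whose dot product with u_GTG is larger,
        i.e. the direction that still makes progress toward the goal."""
        d_c = u_gtg[0] * u_c[0] + u_gtg[1] * u_c[1]
        d_cc = u_gtg[0] * u_cc[0] + u_gtg[1] * u_cc[1]
        return u_c if d_c > d_cc else u_cc

    u_c, u_cc = follow_wall_candidates([1.0, 0.0])
    u_fw = pick_direction([0.0, 1.0], u_c, u_cc)
    ```
    
    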

    The direction the robot selects to follow the boundary is

    determined by the direction of u_GTG, using the dot products of u_GTG with u_FW^c and u_FW^cc.

    Another issue to be addressed is when the robot releases

    u_FW, that is, when to stop sliding. The robot stops sliding when

    Figure 7. Point Mass Model

    Figure 8. Suitable Graph for an Appropriate K

    Figure 9. Suitable Graph for an Appropriate K

    Figure 10. Setup for Follow-wall Mode

    enough progress has been made and there is a clear shot to the goal, where τ is the time of the last switch.

    2.2.2.5 Implementation of the Navigation Algorithms

    The behaviors or modes discussed above are put together

    to form the navigation architecture shown in Figure 11 below.

    The robot starts at an initial state and arrives at the goal by switching between the three different operation modes; this

    system of navigation is termed the hybrid automata (HA), where

    the navigation system has been described using both the

    continuous dynamics and the discrete switch logic.

    Figure 11. Setup for Navigation Architecture

    An illustration of this navigation system is shown in Figure 12,

    where the robot avoids a rectangular block as it moves to a

    goal.

    Figure 12. Illustration of the Navigation System

    2.2.2.6 Tracking and Transformation of the Simple Model Input

    The simple planning model input, u = ẋ, can be tracked using a PID controller, or a clever transformation can be used to transform it into the unicycle model input, (v, ω). These two approaches are discussed below.

    Method 1: Tracking Using a PID Controller

    Let the output from the planning model be the desired point-mass velocity u, and the current position of the point mass be (x, y), as in Figure 13; then the input, (v, ω), to the unicycle model can be determined as shown below

    v = √(u_1² + u_2²),  θ* = atan2(u_2, u_1)

    with the heading θ driven toward θ* by the PID controller.

    Method 2: Transformation

    In this clever approach a new point of interest, p, is selected on the robot at a distance l from the center of mass, c = (x, y), as shown in Figure 14, where

    p = (x + l cos θ, y + l sin θ)

    If the orientation is ignored, then the planner simply drives the new point with ṗ = u. Differentiating p gives

    ṗ = (v cos θ − l ω sin θ, v sin θ + l ω cos θ)

    Substituting ṗ = u and solving for the unicycle inputs, we have

    v = u_1 cos θ + u_2 sin θ
    ω = (−u_1 sin θ + u_2 cos θ) / l
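    Method 2 can be sketched as a small function mapping a planned point-mass velocity u to unicycle inputs; the offset l is hypothetical:

    ```python
    import math

    def point_mass_to_unicycle(u, theta, l=0.1):
        """Invert p' = u for a point offset a distance l ahead of the
        center of mass: v = cos(theta)*u1 + sin(theta)*u2,
        w = (-sin(theta)*u1 + cos(theta)*u2) / l."""
        ux, uy = u
        v = ux * math.cos(theta) + uy * math.sin(theta)
        w = (-ux * math.sin(theta) + uy * math.cos(theta)) / l
        return v, w

    # Desired motion straight along +x while already heading along +x:
    v, w = point_mass_to_unicycle((0.5, 0.0), 0.0)
    ```
    
    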

    3. SIMULATIONS AND EXPERIMENTATION

    3.1 Simulations Using MATLAB/Simulink

    Simulink models that implement the control algorithms

    discussed in Section 2 are presented in Figures 15 to 18. These

    models are based on the unicycle model in Figure 2.


    Figure 13. Driving the Point Mass in a Specific Direction

    Figure 14. DDWMR Model Showing the New Point

  • Figure 15. Model that Drives the Robot to a Point

    Figure 16. Model that Drives the Robot to a Pose

    Figure 17. Model that Drives the Robot along a Line

    Figure 18. Model that Moves the Robot in a Unit Circle

    Figure 19 (a-d). Sequence Showing the MATLAB Robot Simulator Movement

    A MATLAB robot simulator developed by [10] was used

    to simulate the navigation architecture control algorithms

    presented in Section 2; it combines the GTG, AO, and FW

    controllers into a full navigation system. Figure 19(a-d) shows

    a sequence of simulator movements. The robot navigates

    around a cluttered, complex environment, reaching its goal location successfully without colliding with

    any obstacles.

    The algorithm controlling the simulator implements a finite

    state machine (FSM) to solve the full navigation problem. The

    FSM uses a set of if/elseif/else statements that first check which

    state (or behavior) the robot is in, and then based on whether an

    event (condition) is satisfied, the FSM switches to another state

    or stays in the same state, until the robot reaches its goal.

    The robot simulator mimics the Khepera III (K3) mobile robot, which has infrared (IR) range sensors: nine are located in a ring around it and two are located on the underside. The IR

    sensors are complemented by a set of five ultrasonic sensors.

    The K3 has a two-wheel differential drive with a wheel encoder

    for each wheel, for measuring the robot position.
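    The if/elseif/else switching of such an FSM can be sketched as below; the state names, events and thresholds are hypothetical, not the simulator's actual code:

    ```python
    def next_state(state, dist_to_goal, dist_to_obstacle,
                   at_goal_eps=0.05, danger=0.1, clear=0.3):
        """One step of a go-to-goal / avoid-obstacle / follow-wall FSM:
        first check which state the robot is in, then switch on the
        triggering event or stay in the same state."""
        if dist_to_goal < at_goal_eps:
            return "STOP"                                  # goal reached
        if state == "GTG":
            return "AO" if dist_to_obstacle < danger else "GTG"
        elif state == "AO":
            return "FW" if dist_to_obstacle >= danger else "AO"
        elif state == "FW":
            return "GTG" if dist_to_obstacle > clear else "FW"
        return state

    s = next_state("GTG", dist_to_goal=1.0, dist_to_obstacle=0.05)
    ```
    
    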

    Figure 20. Dr Robot X80SV used for the Experiments

    3.2 Experimentation Using Dr Robot X80SV

    The control algorithm built for the navigation architecture

    presented in Section 2 was experimented on Dr Robot X80SV,

    using a modified program developed in C# by Dr Robot Inc. [9], in an office environment. The Dr Robot X80SV can be made to move to a

    goal, wander around the environment, and follow a

    given path while avoiding obstacles in front of it.

    The Dr Robot X80SV, shown in Figure 20, is a fully

    wireless networked system that uses two quadrature encoders

    on each wheel for measuring its position and seven IR and three

    ultrasonic range sensors for collision detection. It has a 2.6x

    high-resolution pan-tilt-zoom CCD camera with two-way

    audio capability, two 12 V motors with over 22 kg·cm of torque

    each, and two pyroelectric human motion sensors.

    Its dimensions are 38 cm (length) × 35 cm (width) × 28 cm

    (height); it carries a maximum payload of 10 kg (optionally 40 kg) with a robot

    weight of 3 kg. Its 12 V 3700 mAh battery pack provides three hours

    of nominal operation per recharge, and it can drive at up

    to a maximum speed of 1.0 m/s. The distance between the

    wheels is 26 cm and the radius of the wheels is 8.5 cm.

    Figure 21. Closed-loop Feedback System for Controlling

    the DC Motor of the Dr Robot X80SV [9]


  • The feedback system depicted in Figure 21 shows how the

    DC motor system of the robot is controlled. A screenshot of the

    C# interface used for the Dr Robot X80SV control is shown in

    Figure 22(a and b).

    Figure 22 (a). C# Graphic Interface: Main Sensor

    Information and Sensor Map

    Figure 22 (b). C# Graphic Interface: Path Control

    Figure 23(a-f) shows the experimental setup and a

    sequence of movements of the Dr Robot X80SV implementing

    the navigation control algorithm.

    Figure 23 (a-f). Experimental Setup Showing Sequence of the Dr Robot X80SV Movement

    4. RESULTS AND DISCUSSION

    4.1 Simulink Models Results

    The time-domain Simulink simulations were carried out over a fixed duration for each model (refer to the simulation setups in Figures 15 to 18). The trajectories in

    Figure 24 were obtained using proportional gains K_v and K_h; the final goal point was close to the desired goal for

    the given initial state.

    Trajectories in Figure 25 were obtained using the gains k_ρ,

    k_α and k_β; the final pose was close to the desired pose

    for the given initial state. The trajectories in Figure 26 were obtained with the gains

    K_d and K_h, driving the robot at a constant speed. The trajectory in Figure 27 was obtained using a standoff distance d and PID controller gains, with the goal a point generated to move around the unit circle at a

    fixed frequency.

    Figure 24. Move to a Point

    Figure 25. Move to a Pose

    Figure 26. Follow a Line


    Figure 27. Follow a Circle

  • 4.2 MATLAB Robot Simulator Results

    The trajectory shown in Figure 28a (refer to the simulation setup in Figure 19) was obtained using PID controller gains, a given initial state, and a desired goal of (1, 1). The final goal point associated with the simulation was (1.0077, 0.9677). Even though there is steady-state error in both states,

    x and y, the result is encouraging, taking into account that a nonzero stopping distance was specified.

    Figure 28 (a and b). Trajectory in the xy-plane: (a) Robot Simulator, (b) Experiment

    4.3 Experimental Results

    The trajectory shown in Figure 28b (refer to a similar

    experimental setup in Figure 23) was obtained using PID

    controller gains for the position control and for the velocity control, with a given initial state and desired pose. The final pose associated with the experiment showed steady-state errors in the state values, but the result is

    encouraging. Possible causes of the errors are friction between

    the robot wheels and the floor, imperfections in the sensors

    and/or unmodeled factors (e.g., friction and backlash) in the

    mechanical parts of the DC motor.

    Note that during the experiments the robot sometimes got

    lost or wandered around before arriving at the desired pose.

    This is because the navigation system is not robust. It was built

    using low-level planning based on a simple model of a point

    mass and the application of a linear PID controller.

    5. CONCLUSION

    In this paper, an effective navigation control algorithm was

    presented for a DDWMR, simulated using a MATLAB robot

    simulator that mimics the robot, and implemented on the Dr Robot X80SV platform using C#. The algorithm makes the

    robot move to a pose, wander around an office

    environment, and follow a given path while avoiding

    obstacles in the way.

    This research paper has also demonstrated how to make a

    unicycle, kinematically equivalent to a DDWMR, move to a point, move to a pose, follow a line, move in a circle, and avoid obstacles. These were simulated using Simulink models.

    In future work, the authors will integrate real-time vision-based

    object detection and recognition to address the imperfection of

    the IR and ultrasonic range sensors. In addition, one or more

    parameters such as length of path, energy consumption or

    journey time optimization will be considered. The authors also

    aim to design robust control algorithms with respect to time

    delays. Furthermore, application of a high-level planning such

    as Dijkstra, A*, D* or RRT for the robot path planning, and the

    application of a non-linear control law such as adaptive control

    to address some of the deficiencies of the navigation system

    will be studied.

    Another possible future work is to make the WMR

    collaborate with unmanned aerial vehicles (UAV) for a range of

    potential applications that include search and rescue,

    surveillance, hazardous site inspection, planetary exploration

    missions and domestic policing.

    ACKNOWLEDGMENT

    This work was supported by the NC Space Grant.

    REFERENCES

    [1] Borenstein, J. & Koren, Y. (1989). Real-time obstacle

    avoidance for fast mobile robots. Systems, Man and

    Cybernetics, IEEE Transactions, 19, pp 1179-1187.

    [2] Corke, P. (2011). Robotics, Vision and Control: Fundamental

    Algorithms in MATLAB. Berlin: Springer, pp. 61-78.

    [3] Corke, P. (1993-2011). Robotics Toolbox for MATLAB

    (Version 9.8) (Software). Retrieved from:

    http://www.petercorke.com/RVC

    [4] De Luca, A., Oriolo G. & Vendittelli M. (2001). Control of

    wheeled mobile robots: An experimental overview. Berlin:

    Springer, pp 181-226.

    [5] Egerstedt, M. (2013). Control of mobile robots [PowerPoint

    slides]. Retrieved from: https://class.coursera.org/conrob-

    001/class/index

    [6] Jones J. L., Flynn A. M. (1993). Mobile robots: Inspiration

    to implementation. A K Peters, Wellesley, MA

    [7] Katsuhiko, O. (2002). Modern control engineering. Fourth

    Edition. New Jersey: Prentice-Hall, pp 681-700.

    [8] Siegwart, R., Nourbakhsh, I. & Scaramuzza, D. (2004).

    Introduction to autonomous mobile robots. Second Edition.

    Massachusetts: The MIT Press, pp. 91-98.

    [9] C# Advance X80 Demo Program (Software). (2008). Dr

    Robot. Retrieved from: http://www.drrobot.com

    [10] MATLAB Robot Simulator (Software). (2013). Georgia

    Institute of Technology, Georgia: Georgia Tech Research

    Corporation. Retrieved from:

    http://gritslab.gatech.edu/projects/robot-simulator/


  • ASME District F - Early Career Technical Conference Proceedings, ASME District F ECTC 2013

    November 2-3, 2013 - Birmingham, Alabama USA

    ROBOTIC BEHAVIOR PREDICTION USING HIDDEN MARKOV MODELS

    ABSTRACT

    There are many situations in which it would be

    beneficial for a robot to have predictive abilities similar to

    those of rational humans. Some of these situations include

    collaborative robots, robots in adversarial situations, and

    dynamic obstacle avoidance. This paper presents an

    approach to modeling behaviors of dynamic agents in order

    to empower robots with the ability to predict the agent's

    actions and identify the behavior the agent is executing in

    real time. The method of behavior modeling implemented

    uses hidden Markov models (HMMs) to model the

    unobservable states of the dynamic agents. The background

    and theory of the behavior modeling is presented.

    Experimental results of realistic simulations of a robot

    predicting the behaviors and actions of a dynamic agent in

    a static environment are presented.

    INTRODUCTION

    Robots are becoming increasingly autonomous,

    requiring them to operate in more complicated

    environments and assess complex phenomena in the world

    around them. The ability to determine the behavior of

    humans, other robots, and obstacles efficiently would

    benefit many fields of robotics. Some such fields include

    rescue robots, collaborative or assistive robots, multi-robot

    systems, military robots, and autonomous vehicles. The

    process of a robot determining the behavior or intent of

    other agents is inherently probabilistic since the robot has

    no way of knowing the intent of the other agent. The robot must make a best guess based on its observations of

    the agent's states. Yet, the same behavior may manifest itself as different sequences in the change of state of the

    agent being observed. Additionally, robots observe the state

    of the agent via noisy sensors. This results in a doubly

    stochastic process where an agent's behavior is probabilistically dependent on the sequence of state

    changes it undergoes, and the observations seen by the

    robot depend probabilistically on the state the agent is in. A

    good model for the behavior of an agent being observed,

    therefore, would incorporate both of these types of

    uncertainty. One such model is a hidden Markov model

    (HMM). A HMM is a statistical model that is a Markov

    process with states that cannot be directly observed, i.e.

    they are hidden. The basic theory of hidden Markov models was developed and published in the early 1970s by Baum and his colleagues [1]-[2]. In the following several

    years HMMs started being applied to speech processing by

    Jelinek and others [3],[8],[9]. Since the theories were

    originally published in mathematics journals, HMM

    techniques were mostly unknown to engineers until

    Rabiner [13] published his popular tutorial in 1989

    thoroughly describing HMMs and some applications in

    speech processing. Research in other applications has

    begun to emerge.

    In some previous works, there has been a focus on

    predicting the behaviors of human drivers. In [4] and [5],

    the authors researched in-vehicle systems that predict the

    operator's driving intent, but these systems have direct access to the driver's inputs to the vehicle (i.e. steering angle, accelerator depression). In [6] Han and Veloso use

    hidden Markov models to predict the behaviors of soccer

    playing robots and utilize the probability of being in certain

    "accept" or "reject" states as the condition for establishing the occurrence of a behavior. In another application to robotic

    soccer, [11] uses sequential pattern mining to look for mappings between currently occurring patterns and stored

    predicate patterns. In [7] the authors model human

    trajectories based on position observations at certain time

    intervals. A HMM approach is used in [10], where the

    authors study a mobile robot observing interactions

    between two humans using a camera.

    In this paper, a behavior modeling approach using

    hidden Markov models is introduced that utilizes event

    based observations. A novel method of determining

    whether any of a set of non-exclusive behaviors are

    occurring is described. The performance of these models is

    Alan J. Hamlet, Carl D. Crane Department of Mechanical and Aerospace Engineering, University of Florida

    Gainesville, Florida, USA

  • then tested in realistic simulations involving two mobile

    robots in an indoor environment. The observations for the

    models are made by the robot using a common laser range

    finder. The remainder of this paper is laid out as follows:

    first, the background and theory of the HMM behavior

    models is explained. Next, the method of behavior

    prediction is introduced, followed by the experimental set-

    up and simulation description. Finally, the results of the

    simulations are presented and discussed.

    BEHAVIOR MODELING

    In this paper, behaviors are defined as a temporal

    series of state changes of a dynamic agent. The states of the

    dynamic agent are assumed to be not directly observable,

    but to have observations that are correlated to the agent's state in a probabilistic manner. This is a reasonable

    assumption because when the higher level intent or goal of

    a dynamic agent is to be inferred, the relevant states would

    be the agent's internal or mental states. These states are not directly observable but will result in measurable physical

    phenomena that are correlated to the agent's state. The behavior dictates how the agent changes from one state to

    another. When behaviors are defined in this way, hidden

    Markov models are ideal representations for them.

    There are several elements used to describe a HMM.

    The number of states, N, is the number of distinct, finite

    states in the model. Individual states are represented as

S = \{S_1, S_2, \dots, S_N\}.

The current state is q_t. Although the observations in hidden Markov models can be continuous random variables, in this

paper only behavior models that have a finite number of

discrete observation symbols are considered. Therefore, M

is the number of possible observation tokens in the model,

where the individual observation tokens are denoted as

V = \{v_1, v_2, \dots, v_M\}.

    The state transition matrix

A = \{a_{ij}\}

is the matrix of probabilities of switching from one state to another, i.e.

a_{ij} = P(q_{t+1} = S_j \mid q_t = S_i).

    It should be noted that HMMs assume the system follows

    the Markov, or memoryless property. The assumption of the

    Markov property is that the probability of the next state of

    the system depends only on the current state of the system

    and is independent of the previous system states, i.e.

P(q_t = S_j \mid q_{t-1}, q_{t-2}, \dots, q_1) = P(q_t = S_j \mid q_{t-1}).

    The observation symbol probability distribution for

    state j is

b_j(k) = P(O_t = v_k \mid q_t = S_j),

1 \le j \le N, \quad 1 \le k \le M.

In other words, b_j(k) is the probability of seeing observation

symbol v_k at time t, given that the state at time t is S_j. In the discrete model, this yields an N \times M matrix of observation probabilities.

    The final element of HMMs is the initial state

    distribution,

\pi_i = P(q_1 = S_i), \quad 1 \le i \le N.

    The entire model can be specified by the three probability

measures A, B, and \pi, so the concise notation

\lambda = (A, B, \pi)

is used. The goal is to determine if a series of observations

O = O_1 O_2 \cdots O_T

was generated by a behavior modeled as \lambda = (A, B, \pi), where each O_t is an observation symbol in V. Throughout this

    paper the agent is the dynamic object that is being observed and whose behavior is to be determined. The

    agent could be a human, robot, or moving obstacle. The

robot is the observer trying to determine the agent's intent.
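To make the five elements concrete, a toy discrete HMM (all numbers purely illustrative, not taken from the paper's experiments) can be written out directly as its probability tables in Python:

```python
# A toy discrete HMM, lambda = (A, B, pi), with N = 2 hidden states
# and M = 3 observation symbols. All numbers are illustrative.
N, M = 2, 3

# A[i][j] = P(q_{t+1} = S_j | q_t = S_i): state transition matrix.
A = [[0.7, 0.3],
     [0.4, 0.6]]

# B[j][k] = P(O_t = v_k | q_t = S_j): observation probabilities (N x M).
B = [[0.5, 0.4, 0.1],
     [0.1, 0.3, 0.6]]

# pi[i] = P(q_1 = S_i): initial state distribution.
pi = [0.6, 0.4]

# Every row of A and B, and pi itself, is a probability distribution.
for row in A + B + [pi]:
    assert abs(sum(row) - 1.0) < 1e-9
```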

    In typical applications, observation symbols for HMMs

    are measured at regular time intervals. To take advantage of

    the discrete nature of the models used here and reduce the

    length of the observation sequences (and therefore the

required computation), observation symbol generation is

    event-based. Only when an event triggers the value of an

    observation to change is a symbol generated. The subscript

    t is used to denote the sequential time step, but the temporal

    spacing between observations can be highly irregular.
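A minimal sketch of such an event-based scheme (the function names and the heading-bin observation are assumptions for illustration, not the paper's implementation) quantizes a continuous quantity and emits a symbol only when the quantized value changes:

```python
import math

def quantize_heading(vx, vy, n_bins=8):
    """Map a velocity direction to one of n_bins discrete sectors."""
    angle = math.atan2(vy, vx) % (2 * math.pi)
    return int(angle / (2 * math.pi / n_bins)) % n_bins

def event_based_symbols(velocities, n_bins=8):
    """Emit an observation symbol only when the quantized heading
    changes (an event), rather than at every time step."""
    symbols = []
    last = None
    for vx, vy in velocities:
        s = quantize_heading(vx, vy, n_bins)
        if s != last:          # event: the observation value changed
            symbols.append(s)
            last = s
    return symbols

# A long run of nearly constant heading collapses to a few symbols.
vels = [(1, 0)] * 50 + [(0, 1)] * 50 + [(-1, 0)] * 50
print(event_based_symbols(vels))   # -> [0, 2, 4]
```

Note how 150 raw samples reduce to three symbols, which is what keeps the observation sequences (and the forward-algorithm cost) short.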

BEHAVIOR PREDICTION

Using the form of the behavior model described above,

    models can be trained to match observations generated

    from a behavior of interest. While there is no analytical

solution for optimizing the model parameters \lambda = (A, B, \pi) to maximize P(O \mid \lambda), iterative techniques can be used to find a

local maximum. A modified version of Expectation-

Maximization known as the Baum-Welch algorithm is a

    commonly used method of doing this and is the method

    used in this study. Since the solution is only a local

    maximum, care must be taken in selecting initial

    conditions. Once the model is trained on a series of

    observation sequences, the model can be used to calculate

    the likelihood of a new sequence being generated by the

model. This is where the strength of HMMs lies, since by

using dynamic programming techniques this can be

calculated in just O(N^2 T) operations. To do so, a variable

    known as the forward variable is defined:

\alpha_t(i) = P(O_1 O_2 \cdots O_t, q_t = S_i \mid \lambda)

where \alpha_t(i) is the joint probability of the observation sequence up to time t and the state being S_i at time t, given the model \lambda. The probability of the entire sequence given

the model, P(O \mid \lambda), can be found inductively in three steps as follows:

1) \alpha_1(i) = \pi_i b_i(O_1), \quad 1 \le i \le N

2) \alpha_{t+1}(j) = \left[ \sum_{i=1}^{N} \alpha_t(i) a_{ij} \right] b_j(O_{t+1}), \quad 1 \le t \le T-1, \; 1 \le j \le N

3) P(O \mid \lambda) = \sum_{i=1}^{N} \alpha_T(i)

    Here, the forward variables are first initialized as the joint

probabilities of the state S_i and the observation O_1. To calculate the next forward variable, the previous forward

    variables are multiplied by the transition probabilities and

summed up (to find the probability of transitioning to state

S_j) before being multiplied by the next observation probability. Finally, by summing up the forward variables

    over all the states at the final time step, the state variable is

    marginalized out and the probability of the entire

    observation sequence is obtained. In the same manner, by

    summing up the forward variables at the current time step,

    the probability of the current partial observation sequence

    can be found.
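The three-step induction above translates directly into code. The sketch below uses the toy list-of-lists conventions A[i][j], B[j][k], pi[i] (an illustration, not the authors' implementation):

```python
def forward(obs, A, B, pi):
    """Forward algorithm. Returns the table of forward variables,
    where alpha[t-1][i] corresponds to alpha_t(i) in the text."""
    N = len(pi)
    # Step 1: initialization, alpha_1(i) = pi_i * b_i(O_1).
    alpha = [[pi[i] * B[i][obs[0]] for i in range(N)]]
    # Step 2: induction, alpha_{t+1}(j) = [sum_i alpha_t(i) a_ij] * b_j(O_{t+1}).
    for o in obs[1:]:
        prev = alpha[-1]
        alpha.append([sum(prev[i] * A[i][j] for i in range(N)) * B[j][o]
                      for j in range(N)])
    return alpha

def sequence_probability(obs, A, B, pi):
    """Step 3: P(O | lambda) = sum_i alpha_T(i)."""
    return sum(forward(obs, A, B, pi)[-1])

# A 2-state, 2-symbol example (all numbers illustrative).
A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.5, 0.5]
p = sequence_probability([0, 1, 0], A, B, pi)  # -> 0.099375
```

The cost is O(N^2) per observation, hence O(N^2 T) overall, rather than the O(N^T) of enumerating all state paths.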

If the goal is to discriminate between a finite set of R known independent behaviors, modeled as \lambda_k, 1 \le k \le R, then the probability can be normalized as

P(\lambda_k \mid O) = \frac{ \sum_{i=1}^{N} \alpha_T^{(k)}(i) }{ \sum_{r=1}^{R} \sum_{i=1}^{N} \alpha_T^{(r)}(i) }

so that \sum_{k=1}^{R} P(\lambda_k \mid O) = 1 and the most likely behavior is

simply

\operatorname{argmax}_{1 \le k \le R} P(\lambda_k \mid O).

    This same process can also be done online on partial

    observation sequences so that the most likely occurring

    behavior can be updated after every time step.
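The normalize-and-argmax step is only a few lines once a forward routine is available (the compact `seq_prob` below is a minimal forward implementation; all names and numbers are illustrative):

```python
def seq_prob(obs, A, B, pi):
    """P(O | lambda) via the forward algorithm."""
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    return sum(alpha)

def most_likely_model(obs, models):
    """Normalize P(O | lambda_k) across all models, pick the argmax."""
    raw = [seq_prob(obs, *m) for m in models]
    total = sum(raw)
    posteriors = [p / total for p in raw]
    best = max(range(len(models)), key=lambda k: posteriors[k])
    return best, posteriors

# Two toy single-state models: one favors symbol 0, one favors symbol 1.
m0 = ([[1.0]], [[0.9, 0.1]], [1.0])
m1 = ([[1.0]], [[0.2, 0.8]], [1.0])
best, post = most_likely_model([0, 0, 1, 0], (m0, m1))
assert best == 0 and abs(sum(post) - 1.0) < 1e-9
```

Because the forward variables can be carried over between time steps, the same computation supports the online, partial-sequence updates described above.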

    While the above procedure is optimal when models for

    all possibly occurring behaviors are available (and are

    independent of one another), this is rarely the case. Also, it

    would be beneficial to be able to add or remove behaviors

    without modifying software and to be able to determine if

    none, or multiple, of the modeled behaviors are occurring.

    Therefore, a novel method of obtaining the likelihood of

    behaviors is introduced.

    In itself, the probability of a partial observation

    sequence for a given model does not convey very much

    information. This is because, depending on the number of

    possible observation sequences, even the most likely

    sequence of observations can have a very low probability.

    Therefore, in order to determine the likelihood of a

    behavior model given a partial observation sequence, the

    probability of the current observation sequence is

    normalized with respect to the probability of the most

    likely observation sequence of equal length. Then, that

    likelihood is scaled by the predicted fraction of the

    behavior that has been observed.

L(\lambda_i \mid O_1 O_2 \cdots O_t) = \frac{t}{T_i} \cdot \frac{ \sum_{j=1}^{N} \alpha_t^{(i)}(j) }{ \max_{O'} P(O'_1 \cdots O'_t \mid \lambda_i) }

Behaviors are not assumed to be of equal length;

therefore, the length of an observation sequence

corresponding to behavior i is denoted as T_i. For each behavior, the normalizers were pre-computed as

\max_{O'} P(O'_1 \cdots O'_t \mid \lambda_i), \quad 1 \le t \le T_i.
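A sketch of this scaled likelihood follows. Note the assumptions: the scaling fraction is taken as t/T_i, and the normalizer max_O' P(O' | lambda_i) is found here by brute-force enumeration of all M^t sequences, which is feasible only for small M and t (the paper pre-computes these values instead):

```python
from itertools import product

def seq_prob(obs, A, B, pi):
    """P(O | lambda) via the forward algorithm."""
    N = len(pi)
    alpha = [pi[i] * B[i][obs[0]] for i in range(N)]
    for o in obs[1:]:
        alpha = [sum(alpha[i] * A[i][j] for i in range(N)) * B[j][o]
                 for j in range(N)]
    return sum(alpha)

def max_seq_prob(t, M, A, B, pi):
    """Normalizer max over all length-t observation sequences of
    P(O | lambda). Brute force: enumerates M**t sequences."""
    return max(seq_prob(list(o), A, B, pi)
               for o in product(range(M), repeat=t))

def likelihood(obs, A, B, pi, T_i, M):
    """Scaled likelihood L(lambda_i | O_1..O_t): the partial-sequence
    probability, normalized by the most likely sequence of equal
    length, scaled by the observed fraction t / T_i (assumption)."""
    t = len(obs)
    return (t / T_i) * seq_prob(obs, A, B, pi) / max_seq_prob(t, M, A, B, pi)

A  = [[0.7, 0.3], [0.4, 0.6]]
B  = [[0.9, 0.1], [0.2, 0.8]]
pi = [0.5, 0.5]
L = likelihood([0, 1], A, B, pi, T_i=4, M=2)
assert 0.0 < L <= 0.5   # bounded by t / T_i = 0.5 here
```

Unlike the normalized posterior, this measure does not force the candidate likelihoods to sum to one, so it can indicate that none, or several, of the modeled behaviors match the observations.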

    Figure 1. Aerial view of the simulated office environment.

EXPERIMENTAL SETUP

The behavior models described above were tested in

    realistic simulations using the open source software ROS

    (Robot Operating System) and Gazebo, an open source

    multi-robot simulator. The simulations were conducted

    using two Turtlebots in a simulated office environment,

    shown in Figure 1 with the observing robot toward the

    center of the figure and the robot representing the dynamic

    agent toward the top of the figure. The following sections

    describe the simulation environment and the software

    implemented in order to test the behavior models presented

    in the previous section.

    Simulation Environment

    The software used for the experimental simulations,

    Gazebo, accurately simulates rigid-body physics in a 3D

    environment. It also generates realistic sensor feedback and

    interactions between three-dimensional objects. An aerial

    view of the office environment used for the simulations is

    shown in Figure 1 above. The observations for the behavior

models were generated using the robot's planar laser range finder. A visualization of the laser range finder's output is shown in Figure 2. The robot is facing the direction

    corresponding to up in Figures 1 and 2, and the dynamic

    agent as seen by the robot can be seen in Figure 2. During

    the simulations, the sensor data generated by the simulator

    was sent to ROS for processing and running the behavior

    recognition algorithms, all in real-time. Both Turtlebots

    were equipped with an inertial measurement unit and

    encoders for measuring wheel odometry. All sensor data

(laser, IMU, and wheel encoders) had Gaussian noise added

    to mimic real-world uncertainty.

    ROS Software

    While some software for operating Turtlebots is

    available open source, all the software for the behavior

recognition algorithms, along with the necessary

peripheral programs, was developed in C++. The observing

    robot tracked the dynamic agent by looking for the known

    robot geometry in each laser range scan. The partial view of

    the circular robot in the office environment can be seen in

    the visualization of the laser range scan in Figure 2. The

    maximum range of the laser range finder was 10 meters and

    the tracker could identify the robot at a range of up to 7.5

    meters. If the robot was identified in a scan, the robot's

    absolute x-y coordinates were calculated and used as input

    to a Kalman filter which estimated the velocity. Using the

    position and velocity information as input to the behavior

models, the likelihood of each model's occurrence was

    calculated. The dynamic agent executed behaviors using a

    PID controller using odometry data as feedback. The

    odometry data was obtained via an Extended Kalman Filter

    that fused wheel encoder data with IMU data. A program

    for generating observations read in the position and

    velocity of the dynamic agent and published observations

    when events triggering them occurred. Then behavior

    recognizers for each of the modeled behaviors determined

    the likelihood of their respective behavior and output the

    data in real time.
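The velocity-estimation step can be sketched with a steady-state (alpha-beta) form of the constant-velocity Kalman filter; the single-axis structure and the fixed gains are illustrative assumptions, not the filter actually used in the study:

```python
def alpha_beta_velocity(positions, dt, alpha=0.5, beta=0.3):
    """Estimate velocity from noisy position fixes along one axis
    using fixed (steady-state) gains instead of a propagated
    covariance. alpha and beta are illustrative tuning values."""
    x, v = positions[0], 0.0
    for z in positions[1:]:
        x_pred = x + v * dt          # constant-velocity prediction
        y = z - x_pred               # innovation
        x = x_pred + alpha * y       # position correction
        v = v + (beta / dt) * y      # velocity correction
    return v

# A target moving at 1 m/s, observed every 0.1 s: the estimate
# converges to roughly 1.0.
positions = [0.1 * k for k in range(100)]
v_est = alpha_beta_velocity(positions, dt=0.1)
assert abs(v_est - 1.0) < 0.05
```

In the full system the estimated velocity then feeds the event-based observation generator, which converts heading changes into discrete symbols for the behavior recognizers.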

RESULTS

To test the behavior models, six behaviors based on

    polygonal trajectories were used. The observations used in

    the models for these trajectories consisted of the change in

    velocity direction of the dynamic agent. Examples of the

trajectories are shown in Figure 3.

Figure 2. Visualization of a laser scan of the environment containing a dynamic agent.

Figure 3. Example trajectories from the six tested behaviors.

Going clockwise from

top-left in Figure 3, the behaviors are referred to as

    rectangle, triangle, convex box, concave box, trapezoid,

    and hourglass.

    The tests performed consisted of having the dynamic

    agent execute each of the six behaviors 10 times while

    behavior recognizers for all six of the models were running.

    Each one of the behaviors was randomized in three ways.

    First, the dynamic agent executed the trajectories in

    different orientations every time by randomly selecting an

    initial direction. Second, the side lengths of the polygonal

    trajectories were varied by randomly selecting a scaling

    factor between 0.5 and 1.5. Finally, the trajectories were

    executed in both clockwise and counter-clockwise

    directions. In addition, the dynamic agent executed the

    behaviors based on a PID controller receiving feedback

    from noisy sensors, increasing the variance in trajectories

    for the same behavior.

    The average outputs of the behavior recognizers for

    each of the six behaviors are graphed in Figures 4-9 as

    functions of the percent of the behavior executed. The

percent of a behavior executed was determined as the average

    distance traveled at the time of the output divided by the

    total trajectory length. For clarity, only the data for

    behavior models having non-zero likelihood are shown. In

    half the cases, the true behavior has the highest likelihood

    starting from the first observation. In every case, the true

    behavior has the highest likelihood by the time 40% of the

    behavior has been executed.

    Figure 4. Experimental results for the trapezoid behavior.

    Figure 5. Experimental results for the rectangle behavior.


    Figure 6. Experimental results for the triangle behavior.

CONCLUSIONS AND FUTURE WORK

This paper presented a method of autonomous

    behavior prediction based on hidden Markov models. A

    procedure for performing the online classification problem

    of selecting the most likely behavior from a set of mutually

    exclusive behaviors was introduced. Next, the procedure

    was extended in order to determine the individual

    likelihood of behaviors for the problem of having an

    incomplete set of potentially dependent behaviors. The

    method uses event-based observation models and measures

    the similarity between the current observations and the

    ideal observations for a given behavior. The models were tested in simulation using two robots in a static

    environment. The results showed very good prediction of

    the simple behaviors tested. All models output the highest

    likelihood for the true behavior by the time 40% of the

behavior was executed, and in five out of six behaviors,

    before 25% of the behavior was executed.

    In future work, more complex behaviors with realistic

    applications will be modeled. The models will be expanded

    to include continuous observation densities, and the models

    will be tested on physical hardware as well as in

    simulation.

REFERENCES

[1] Baum, L. E. (1972). An inequality and associated

    maximization technique in statistical estimation for

    probabilistic functions of Markov processes.

    Figure 7. Experimental results for the hourglass behavior.

    Figure 8. Experimental results for the convex box behavior.

    Figure 9. Experimental results for the concave box behavior.

Inequalities, 3, 1-8.

    [2] Baum, L. E., Petrie, T., Soules, G., & Weiss, N. (1970).

    A maximization technique occurring in the statistical

    analysis of probabilistic functions of Markov chains. The

    annals of mathematical statistics, 164-171.

    [3] Bakis, R. (1976). Continuous speech recognition via

    centisecond acoustic states. The Journal of the Acoustical

    Society of America, 59(S1), S97-S97.

    [4] Salvucci, Dario D. "Inferring driver intent: A case study

    in lane-change detection." Proceedings of the Human

    Factors and Ergonomics Society Annual Meeting. Vol. 48.

    No. 19. SAGE Publications, 2004.

    [5] Pentland, A., & Liu, A. (1999). Modeling and prediction

    of human behavior.Neural computation, 11(1), 229-242.

    [6] Han, Kwun, and Manuela Veloso. "Automated robot

    behavior recognition." ROBOTICS RESEARCH-

    INTERNATIONAL SYMPOSIUM-. Vol. 9. 2000.

    [7] Bennewitz, Maren, Wolfram Burgard, and Sebastian

    Thrun. "Using EM to learn motion behaviors of persons

    with mobile robots." Intelligent Robots and Systems, 2002.

    IEEE/RSJ International Conference on. Vol. 1. IEEE, 2002.

    [8] Jelinek, F., Bahl, L., & Mercer, R. (1975). Design of a

    linguistic statistical decoder for the recognition of

    continuous speech. Information Theory, IEEE Transactions

    on, 21(3), 250-256.

    [9] Jelinek, F. (1976). Continuous speech recognition by

    statistical methods. Proceedings of the IEEE, 64(4), 532-

    556.

    [10] Kelley, R.; Nicolescu, M.; Tavakkoli, A.; King, C.;

    Bebis, G., "Understanding human intentions via Hidden

    Markov Models in autonomous mobile robots," Human-

    Robot Interaction (HRI), 2008 3rd ACM/IEEE

    International Conference on , vol., no., pp.367,374, 12-15

    March 2008

    [11] Lattner, Andreas D., et al. "Sequential pattern mining

    for situation and behavior prediction in simulated robotic

    soccer." RoboCup 2005: Robot Soccer World Cup IX.

    Springer Berlin Heidelberg, 2006. 118-129.

    [12] Thrun, S., Burgard, W., & Fox, D.

    (2005). Probabilistic robotics (Vol. 1). Cambridge, MA:

    MIT press.

[13] Rabiner, Lawrence R. A tutorial on hidden Markov models and selected applications in speech recognition. Proceedings of the IEEE, 77(2), 257-286, 1989.

[14] Fox, M., Ghallab, M., Infantes, G., & Long, D. (2006).

    Robot introspection through learned hidden markov

    models. Artificial Intelligence, 170(2), 59-113.

    [15] Liu, Yanming, and Karlheinz Hohm. "Discrete hidden

    Markov model based learning controller for robotic

    disassembly." Proceedings of the international ICSC/IFAC

symposium on neural computation (NC'98), Vienna, 1998.

[16] Yang, J., et al. Hidden Markov model approach to

    skill learning and its applications to telerobotics. IEEE

Transactions on Robotics and Automation, 10(5):621-631, October 1994.

  • ASME District F - Early Career Technical Conference Proceedings ASME District F - Early Career Technical Conference, ASME District F ECTC 2013

November 2-3, 2013 - Birmingham, Alabama USA

NEAR REAL-TIME TELEOPERATION CONTROL OF THE STÄUBLI TX40 ROBOT

    Tom Trask University of North Florida Jacksonville, Florida USA

    Arnold Cassell Seagate

    Minneapolis, Minnesota USA

    Dang Huynh University of North Florida Jacksonville, Florida USA

    Daniel Cox University of North Florida Jacksonville, Florida USA

ABSTRACT

The Stäubli TX40 robot is an industrial robot with motion

    control coordinated and accomplished through a programming console and a pendant interfaced to the robot. Neither of these interfaces is suitable for teleoperation control, which is a transparent intuitive human user interface to seamlessly control the motion of the robots end effector. User control of the robot motion is enabled through a custom-developed, network-based communications system architecture. The system architecture is demonstrated through implementation of two commercial off-the-shelf user input devices used for teleoperation.

The custom-developed architecture binds the movements actuated by the user to the robot. The implementation of the architecture allows the user to control the data rate and motion sensitivity for the six motion axes. Commands are sent via the internet to the robot controller and read through the communications features of the robot. The commands provide the motion control, including input filtering, a communications protocol, and the motion calculations for coordinated six degree-of-freedom teleoperated motion control of the robot. The telerobotic motion commands are decoded and translated into motion commands, customizing the software of the commercial industrial robot to accomplish user-transparent spatial motion control.

INTRODUCTION

Teleoperation involves the physical operation of machinery

    using a remotely controlled interface. Telerobotics involves the teleoperation of robot manipulators. Teleoperation is important for multiple reasons, and has received much attention. Early attention has been motivated by space exploration. In [1], salient problems in the supervisory control using telerobotics are outlined, while hardware implementations are addressed in [2,3]. A telerobotic system may also have force reflection in which the operator kinesthetically feels the force exerted at the end effector of the robotic manipulator [4,5,6]. Stability in

band-limited behavior is described in [7]. A well-known stability issue for telerobotics is time delay [5,8,9], a problem that is pervasive due to communication time lag [9]. Mobile robots present another application domain for telerobotics [10], as do ocean environments [7]. Strategies to accomplish force reflection under time-delay considerations are also studied [6,11,12].

Applications of telerobotics continue to grow, as the maturation of robotic technology allows systems to be operated remotely and efficiently in hazardous environments and material handling [13,14], in nuclear disasters using the internet [15], and in evolving space systems [16]. Teleoperation is also being pursued in the health industry in an effort to increase the mobility of handicapped patients using assistive robotics.

    Available teleoperation technologies are not intuitively easy to use; a goal of this research is to develop an easier-to-use interface architecture for robotic teleoperation using simple available off-the-shelf input devices. This paper describes a teleoperation interface architecture making use of off-the-shelf user input devices and integrated with an industrial robot. Development of the architecture to capture user input and the introduction of the custom-developed graphical user interface is described in the next section. Section 3 describe