A Systematic Method of Designing Control
Systems for Service and Field Robots
Cezary Zielinski, Tomasz Kornuta and Tomasz Winiarski
Institute of Control and Computation Engineering
Warsaw University of Technology
Nowowiejska 15/19, 00-665 Warsaw, Poland
Email: {C.Zielinski, T.Kornuta, T.Winiarski}@ia.pw.edu.pl
Abstract—The paper presents a systematic approach to robot control system design. The robot is treated as an embodied agent decomposed into effectors, receptors, both real and virtual, and a control subsystem. Those entities communicate through communication buffers. The activities of those entities are governed by FSMs that invoke behaviours formulated in terms of transition functions, taking as arguments the contents of input buffers and producing the values inserted into output buffers. The method is exemplified by applying it to the design of a control system of a robot capable of locating an open box and covering it with a lid. Other systems that have been designed in a similar way are presented as well, to demonstrate the versatility of the approach.
I. INTRODUCTION
The majority of robot control systems is designed intuitively, without any formal specification, relying solely on the experience of their designers. As robot controllers are currently composed mainly of software, design approaches originating in software engineering dominate. Those approaches differ in the way the designed systems are decomposed into modules, how those modules interact and what functions are allotted to them. As systems of this kind tend to be complex, the resulting architecture or its components can rarely be reused.
This paper focuses on a systematic approach to the design
of robot control systems fostering reuse and thus increasing
the reliability of the resulting system. Reuse is possible if
different systems use common patterns, especially if those patterns are embedded in the application domain, in this case robotics. The seminal work on software paradigms [1] singles out three classes of software patterns: architectural patterns, design patterns and idioms. Architectural patterns define the fundamental structure of the system, i.e. decompose the system into predefined subsystems, specify their activities and provide guidelines for establishing interconnections between them. Design patterns pertain to the inner structure of the subsystems and refine the relationships between them. Idioms are patterns specific to a particular implementation language, thus they are deeply rooted in software engineering. It should be noted that [1] pertains to software engineering and not to robotics directly. Exclusion of domain-specific knowledge from the design process will certainly not lead to better systems. This paper focuses on architectural and design patterns, taking domain knowledge into account.
Although a generally accepted formal approach to robot control system design, rooted in the relevant domain, i.e. robotics, has not yet been established, many tools facilitating the design process have been produced. Initially robot programming libraries were created, e.g. PASRO (Pascal for Robots) [2], [3] based on Pascal, and RCCL (Robot Control C Library) [4] utilising C. The latter triggered, among others, the development of ARCL (Advanced Robot Control Library) [5], RCI (Robot Control Interface) [6] and KALI [7]–[10]. Subsequently libraries were supplemented by patterns (mainly idioms) and thus robot programming frameworks emerged, e.g. Player [11]–[13], OROCOS (Open Robot Control Software) [14], [15], YARP (Yet Another Robot Platform) [16], [17], ORCA [18], [19], ROS (Robot Operating System) [20] and MRROC++ (Multi-Robot Research Oriented Controller based on C++) [21]–[25]. Robot programming frameworks usually try to solve distinct implementation problems. Based on the problems they focus on, the following exemplary types of frameworks were identified in [26]:
- frameworks for device access and control (sensor and effector access),
- high-level communication frameworks (solving the inter-component communication problem),
- frameworks for functionality packaging (e.g. focusing on computer vision for robotics, but including interfaces to the other components of the robot system, so that work on vision is not hindered by them, while still enabling testing of the system as a whole),
- frameworks for evolving legacy systems (enabling reuse of well-tested components implemented in old technologies).
The majority of robot programming frameworks utilizes the software engineering approach to their definition, rather than the domain-based (robotics) one, thus the method of defining interfaces between components prevails over the methods of defining the robotics-based inner workings of those components.
Nevertheless, frameworks currently tend to evolve in the direction of tool-chains (e.g. [27], [28]) rather than providing domain-specific patterns. It is reasonable to assume that before such domain-specific frameworks can emerge, a formalisation for the description of the domain must first be produced. The lack of such a formalism is the reason why the development of domain-specific frameworks is hindered.

978-1-4799-5081-2/14/$31.00 © 2014 IEEE
There has been an ongoing effort to produce a general architecture of a robot system, e.g. CLARAty (Coupled-Layer Architecture for Robotic Autonomy) [29], in this case with the aim of integrating robotic capabilities on different platforms through generalizing legacy software. Many architectures are biologically inspired, e.g. [30]–[32], or stem from specific applications, e.g. autonomous mobile robots [28]. Although those architectures dictate specific system structures and subsystem interactions, they do not provide general mechanisms for describing the operation of the system components. Some of those architectures over-constrain the designer by requiring a specific control paradigm, e.g. reactive or deliberative.
A formal approach to the design of hierarchical embedded intelligence system architectures is presented in [33]. A formalization measure is introduced that qualifies how well the considered formalism (an ADL, i.e. an Architecture Description Language) supports expression, construction and reuse of intelligent systems. ADLs define patterns for connecting and describing components. For the purpose of comparing different ADLs, a measure has been introduced assessing the fulfillment of diverse criteria; however, the presented measures are qualitative rather than quantitative. The considered criteria are: flexible description, meaningful description, standardized description, description of interfaces, decomposition into individuals, description of dependencies, translation into infrastructure, exploitation of infrastructure and subsystem separation. Unfortunately this approach again does not concentrate on the domain that the system represents, giving no guide to the designer as to how to structure the system.
The paper [32] correctly points out that excessive freedom of choice in how the system is structured and implemented means that the considered framework does not capture the fundamental elements of the application domain, and thus the programmer gets little benefit from it. Knowledge of the domain is of paramount importance to both the structure and the functioning of the designed system.
Domain engineering focuses on common factors within a certain type of systems and their applications, which enables the creation of reusable system subparts and structures, especially reusable software. This leads to the standardization of reference architectures and components. The paper [34] states that in the case of robotics the goal of defining a single reference architecture fitting all applications of robots is elusive, because of the characteristics of robots, stemming from the high variability of those devices: multi-purpose, multi-form (many structures), multi-function (diverse capabilities). Five directions of research are postulated: conceptual tools enabling the identification of requirements of robotic systems, abstract models coping with hardware and software diversity, formulation of system development techniques, definition of a methodology for managing the complexity of deploying those systems in the real world, and creation of tools assisting software engineers in the development of robotic applications. The
paper [35] proposes to single out those system elements that remain unchanged when the system itself undergoes modification, to produce a general domain-specific architecture. It also postulates the separation of robot control from robot-environment interaction. Although this subdivision is obviously an important requirement, the general approach to establishing an appropriate system structure is quite tedious in this case. However, this approach is facilitated by providing an Anymorphology design pattern which enables modelling of robot mechanisms. The paper [36] presents domain-specific languages (DSLs) at several levels of abstraction, facilitating programming of robots by employing the task frame formalism (TFF). In general, internal DSLs employ existing general-purpose languages for the representation of the concepts of a particular domain, while external ones rely on languages developed especially for that purpose. In that paper a high-level Ecore metamodel is defined for the representation of actions employing the TFF. Those actions are subsequently integrated into an FSM invoking them. That paper postulates the expression of a metamodel in terms of UML-type diagrams and generation of code on the basis of an instance of such a metamodel. Work also progresses in the area of software language engineering, i.e. the creation of domain-specific languages by using metamodels [37]. Software languages, in contrast to computer or programming languages which target the creation of code for computers, are languages for modeling and creating software, thus they encompass a much wider field of knowledge. Model-driven development focuses on the automatic creation of standard portions of code, thus higher-level instances of abstract syntax have to be automatically transformed into lower-level target languages. As software applications are getting more complex, the languages we use for their creation must be at an appropriately high level of abstraction. Ideas can be expressed clearly only if we filter out the unnecessary details. Those can be added at a later stage, preferably automatically.
The presented approach to the design of robot control
systems focuses on providing a general, yet not constraining,
domain embedded system structure and a method of de-
scribing the operation and interconnection of its components,
supplemented by a universal notation facilitating the system
specification. The level of abstraction is determined by the
system designer, so either a crude representation or a very
detailed one can be produced. The paper first describes the
general method and then provides a simple example, followed
by a list of systems that this approach has been applied to.
II. UNIVERSAL MODEL OF A ROBOTIC SYSTEM
The proposed design method requires the designer to specify a particular model of the robot system executing the task that the system is meant for. This model is produced on the basis of the universal model of a robotic system described below. In this approach robots in single- or multi-robot systems are represented as embodied agents. As embodied agents are the most general form of agents, any robot system can be designed out of them. The specification produced in this way is used as a blueprint for the implementation of the system.
A. General inner structure of an Embodied Agent
A robotic system is represented as a set of agents a_j, j = 1, . . . , n_a, where n_a is the number of agents (j designates a particular agent). Embodied agents have physical bodies interacting with the environment. This paper focuses on embodied agents [38], but all other agents can be treated as special cases with no body, thus the presentation is general. An embodied agent a_j, or simply an agent, possesses real effectors E_j, which exert influence over the environment, real receptors R_j (exteroceptors), which gather information from the surroundings, and a control system C_j that governs the actions of the agent in such a way that its task will be executed. The exteroceptors of the agent a_j are numbered (or named), hence R_{j,l}, l = 1, . . . , n_R, and so are its effectors E_{j,h}, h = 1, . . . , n_E. Both the receptor readings and the effector commands undergo transformations into a form that is convenient from the point of view of the task, hence the virtual receptors r_j and virtual effectors e_j transform raw sensor readings and motor commands into abstract concepts required by the control subsystem to match the task formulation. Thus the control system C_j is decomposed into: virtual effectors e_{j,n}, n = 1, . . . , n_e, virtual receptors r_{j,k}, k = 1, . . . , n_r, and a single control subsystem c_j (fig. 1). Virtual receptors
perform sensor reading aggregation, consisting in either the composition of information obtained from several exteroceptors or in the extraction of the required data from one complex sensor (e.g. a camera). Moreover, the readings obtained from the same exteroceptors R_{j,l} may be processed in different ways, so many virtual receptors r_{j,k} can be formed. The control loop is closed through the environment, i.e. exteroceptor readings R_{j,l} are aggregated by virtual receptors to be transmitted to the control subsystem c_j, which generates appropriate commands for the virtual effectors e_j to translate into signals driving the effectors E_j. This primary loop is supplemented by links going in the opposite direction. The control subsystem c_j can both reconfigure exteroceptors R_j and influence the method by which the virtual receptors r_j aggregate readings, thus a link from the control subsystem to the receptor emerges. The control subsystem also acquires proprioceptive data from the effectors. Moreover, an agent, through its control subsystem, is able to establish two-way communication with other agents a_{j'}, j' ≠ j.
The control subsystem as well as the virtual effectors and receptors use communication buffers to transmit or receive information to/from the other components (fig. 1). A systematic denotation method is used to designate both the components and their buffers. To make the description of such a system concise, no distinction is made between the denotation of a buffer and its state (its content); the context is sufficient. In the assumed notation a one-letter symbol located in the centre (i.e. E, R, e, r, c) designates the subsystem. To reference its buffers, or to single out the state of this component at a certain instant of time, extra indices are placed around this central symbol. The left superscript designates the subsystem to which the buffer is connected. The right superscript designates the
Fig. 1: Internal structure of an agent a_j
time instant at which the state is being considered. The left subscript tells us whether this is an input (x) or an output (y) buffer. When the left subscript is missing, the internal memory of the subsystem is referred to. The right subscript may be complex, with its elements separated by commas. They designate the particular: agent, its subsystem and buffer element. Buffer elements can also be designated by placing their names in square brackets. For instance, ^e_x c_j^i denotes the contents of the control subsystem input buffer of the agent a_j acquired from the virtual effector at instant i. Functions are labeled similarly. The central symbol for any function is f; the left superscript designates the owner of the function and the subsystem that this function produces the result of its computations for, while the right superscripts τ, σ and ε refer to the terminal, initial and error conditions respectively (each one of them being a predicate). A missing right superscript denotes a transition function. The list of right subscripts designates a particular function.
B. General subsystem behaviour
Fig. 2 presents the general work-cycle of any subsystem s, where s ∈ {c, e, r}, of the agent a_j. The functioning of a subsystem s requires the processing of a transition function which takes as arguments the data contained in the input buffers _x s_j and the internal memory ^s s_j, and produces the output buffer values _y s_j and the new memory contents ^s s_j. Hence the subsystem behaviour is described by a transition function ^s f_j defined as:

    ^s s_j^{i+1}, _y s_j^{i+1} := ^s f_j( ^s s_j^i, _x s_j^i ),    (1)

where i and i+1 are consecutive discrete time stamps and := is the assignment operator. Function (1) describes the evolution of the state of a subsystem s. A single function (1) would be too complex to define in a monolithic form, thus it is usually decomposed into a set of partial functions:

    ^s s_j^{i+1}, _y s_j^{i+1} := ^s f_{j,u}( ^s s_j^i, _x s_j^i ),    (2)

where u = 0, . . . , n_{f^s}. The capabilities of the agent arise from the multiplicity and diversity of the partial functions of its subsystems. Such a prescription requires rules of switching between different partial transition functions of a subsystem, thus three additional Boolean-valued functions (predicates) are required:

- ^s f^σ_{j,u}, defining the initial condition,
- ^s f^τ_{j,u}, representing the terminal condition, and
- ^s f^ε_{j,u}, representing the error condition.

The first one selects the transition function for cyclic execution, while the second determines when this cyclic execution should terminate. The last one detects that an error has occurred. Hence a multi-step evolution of the subsystem in the form of a behaviour ^s B_{j,u} is defined as:

    ^s B_{j,u} ≜ ^s B_{j,u}( ^s f_{j,u}, ^s f^τ_{j,u}, ^s f^ε_{j,u} )    (3)

The execution pattern of such a behaviour is presented in fig. 2. The ⋄s denotes all subsystems associated with s_j, belonging either to the agent a_j or to another agent a_{j'} (in the case of the control subsystem some of those subsystems may not belong to the same agent, hence j' appears).

Fig. 2: General flow chart of a subsystem behaviour ^s B_{j,u}, where ⋄ represents any subsystem, including another agent

The behaviours ^s B_{j,u} can be associated with the nodes of a graph and the initial conditions with its arcs, thus a finite state automaton representation results (fig. 3).

Fig. 3: State graph of the behaviour selection automaton

The set of initial conditions singling out the next behaviour to be executed can be used to define the state transition table of the automaton. Behaviour selection, represented by a hexagonal block, is executed as a stateless switch defined by the initial conditions ^s f^σ_{j,u}. ^s B_{j,0} is the default (idle) behaviour, activated when no other behaviour can be activated.
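To make the work-cycle of fig. 2 and the selection switch of fig. 3 concrete, they can be sketched in code. This is a minimal illustrative sketch, not the authors' implementation; the names (Behaviour, run_behaviour, select_behaviour) and the dictionary-based buffers are assumptions made for the example.

```python
# Sketch of the subsystem work-cycle (fig. 2) and behaviour selection (fig. 3).
from dataclasses import dataclass
from typing import Callable, Dict, List, Tuple

State = Dict[str, float]    # internal memory of the subsystem
Buffers = Dict[str, float]  # contents of input/output buffers

@dataclass
class Behaviour:
    f: Callable[[State, Buffers], Tuple[State, Buffers]]  # transition function
    init: Callable[[State, Buffers], bool]      # initial condition (sigma)
    terminal: Callable[[State, Buffers], bool]  # terminal condition (tau)
    error: Callable[[State, Buffers], bool]     # error condition (epsilon)

def run_behaviour(b: Behaviour, mem: State,
                  read: Callable[[], Buffers],
                  write: Callable[[Buffers], None]) -> State:
    """Cyclic execution: iterate the transition function until the
    terminal or error condition holds, writing outputs each step."""
    while True:
        inp = read()
        if b.terminal(mem, inp) or b.error(mem, inp):
            return mem
        mem, out = b.f(mem, inp)
        write(out)

def select_behaviour(behaviours: List[Behaviour],
                     mem: State, inp: Buffers) -> Behaviour:
    """Stateless switch: pick the first behaviour whose initial
    condition holds; behaviours[0] is the default (idle) one."""
    for b in behaviours[1:]:
        if b.init(mem, inp):
            return b
    return behaviours[0]
```

A behaviour that, for instance, increments an internal counter until a limit is reached terminates after a few cycles, after which control returns to the selection switch, mirroring the automaton of fig. 3.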
III. AN EXAMPLE OF THE APPLICATION OF THE DESIGN
METHOD
The proposed design method is exemplified here by producing a control system of a robot capable of covering a box with a lid, a brief and easy-to-follow example. This task requires an exteroceptor (e.g. an RGB-D camera) for locating the box and an impedance-controlled effector (manipulator) to execute the task. However, it should be noted that those elements constitute only a part of the body of the Velma robot used in other experiments. The robot is composed of two KUKA LWR4+ arms equipped with two three-fingered Barrett grippers, mounted on an active torso with an additional 2-DOF active head (fig. 4).
Fig. 4: Two-handed robot Velma with an active torso and head
The task of putting a lid on the box is realised in the
following way. Assuming that the lid is already grasped and
that the vision system has already approximately localised the
box, the robot, using position control (i.e. with high stiffness of the artificial spring in the impedance control law), carries the lid to a pose above the box. Then, using slow motion, it lowers the lid onto the box, decreasing the stiffness. Once contact is attained, the manipulator induces small circular motions of the lid in the plane parallel to the box top while exerting a force in the vertical direction. This should ensure that the lid pops onto the box.

Fig. 5: Inner structure of the agent a_{tr} representing the two-handed robot Velma
The agent a_{tr}, representing the Velma robot, contains three virtual effectors forming the abstraction layer that facilitates the control of the Velma body (fig. 5): e_{tr,b} controls the whole body posture, i.e. torso, head, and the left and right manipulators, while e_{tr,lg} and e_{tr,rg} control the postures of the left and right gripper fingers. The force-torque sensors (mounted in the joints of the KUKA arms) and the tactile sensors (i.e. the artificial skin of the Barrett grippers) are treated as proprioceptors, hence their measurements are processed by the respective virtual effectors and used in motion generation. Additionally, Velma possesses several exteroceptors gathering information about the state of the environment, e.g. the location of objects. The majority of those exteroceptors is located in the active head: two high-resolution digital cameras forming a stereo-pair and two Kinect sensors mounted vertically, thus providing data from complementary sectors in front of them. The information from the stereo-pair is aggregated by the r_{tr,sp} virtual receptor, whereas r_{tr,kin} processes RGB-D images from the pair of Kinect sensors. There are also two virtual receptors, r_{tr,lgc} and r_{tr,rgc}, connected to cameras attached to the left and right gripper respectively.
The exemplary task is very representative of the operations a service robot has to execute, as it requires the combination of both visual and force sensing. To simplify the implementation of the task, an RGB-D camera instead of an RGB camera was mounted on one of the manipulator wrists. However, it was assumed that during task execution this manipulator will be immobile, hence this setup can be classified as SAC (Stand-Alone Camera) [39], with a known pose of the camera with respect to the robot base reference frame. Thus only one virtual effector, one virtual receptor and the control subsystem are presented here. The reduced structure of the agent a_{tr} is presented in fig. 6. For brevity of presentation we describe only a few selected behaviours of those subsystems.
A. Real effector: KUKA LWR4+
The task of putting a lid on the box will be executed by
the KUKA LWR4+, which is a serial, lightweight (16 kg), redundant robot arm with seven degrees of freedom. Its characteristics (fig. 17) are similar to those of a human arm in terms of size, lifting capacity and manipulation capabilities, which enables it to operate safely in a human-oriented environment. LWR4+ joints are equipped not only with motor encoders, but also with torque sensors. Motion can be commanded both in the configuration (joint) space and in the operational space. Moreover, the desired torques can be set in the joints. The LWR4+ industrial controller can be connected to external controllers, particularly for research purposes. For this purpose the industrial controller is equipped with the Fast Research Interface (FRI) [40], which uses Ethernet and the UDP protocol to enable reading the robot's state and transmission of control commands.

Fig. 6: The structure of the agent a_{tr} used in the example
The contents of the input and output buffers of the real effector E_{tr,rm} result from the requirements of the impedance control law in a version which includes a desired torque:

    τ_c^{ς+1} = K_d ( q_d − q_m ) + D_d q̇_m + f( q_m, q̇_m, q̈_m ) + τ_d    (4)

where ς denotes the discrete time instants at which the control law (4) is computed by the real effector (period ≈ 0.3 ms), ι the discrete time instants at which some of the control law parameters are modified by the virtual effector (period = 1 ms), K_d the joint stiffness vector, D_d the joint damping vector, q_d the desired joint position vector and τ_d the desired joint torque vector. Here and in the following, the right subscript d stands for the desired value and m for the measured value, while c designates the current or computed value. The virtual effector e_{tr,b} controls the behaviour of the real effector E_{tr,rm} (i.e. influences its regulator) by providing, through the ^E_y e_{tr,b} buffer to the _x E_{tr,rm} buffer, the values of K_d, D_d, q_d and τ_d. The real effector E_{tr,rm} obtains those values from the virtual effector e_{tr,b}, measures the position q_m and velocity q̇_m, and processes the dynamics model f( q_m, q̇_m, q̈_m ) to compute the control law (4). It also provides to e_{tr,b}, via _y E_{tr,rm} and ^E_x e_{tr,b}, the current value of the Jacobian matrix J_m and the current wrist pose in the form of a homogeneous matrix ^0_W T_m, along with the forces and torques ^W F_m measured in the wrist reference frame. The control law (4) is utilized by the external control loop of the real effector E_{tr,rm}. The computed torque τ_c^{ς+1} is the regulator output, which is subsequently used as an input to the internal loop with the torque controller running at a frequency of tens of kHz. The structure of this regulator as well as its stability analysis were presented in [41].
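Under the stated assumptions, a single real-effector cycle of the control law (4) can be sketched as below. The function and variable names are illustrative, and the dynamics term f(q_m, q̇_m, q̈_m) is passed in as a precomputed vector rather than being evaluated from a manipulator model.

```python
import numpy as np

def impedance_torque(q_d, q_m, dq_m, K_d, D_d, tau_d, f_dyn):
    """One cycle of control law (4):
    tau_c = K_d (q_d - q_m) + D_d dq_m + f(q_m, dq_m, ddq_m) + tau_d,
    with K_d and D_d treated as per-joint stiffness and damping vectors
    and f_dyn standing for the precomputed dynamics-model term."""
    return K_d * (q_d - q_m) + D_d * dq_m + f_dyn + tau_d
```

For a toy two-joint case with K_d = [100, 100], D_d = [5, 5], a position error of [0.1, -0.1] rad and joint velocities [0, -0.2] rad/s, the law yields torques [10, -11] (in consistent units).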
B. Virtual effector representing the impedance-controlled manipulator
The operation of the virtual effector relies on several types of transformations, hence their general form must be explained first. The pose of a reference frame Q with respect to U can be expressed either as a homogeneous matrix ^U_Q T or as a column vector ^U_Q r. In the latter case, the position in Cartesian coordinates is supplemented by rotation angles around the frame axes (the aggregated angle-axis representation). The operator A transforms the above pose representation into a homogeneous matrix, and A^{-1} defines the inverse transformation:

    A( ^U_Q r ) = ^U_Q T,    A^{-1}( ^U_Q T ) = ^U_Q r    (5)

The 6 × 1 column vector ^U ṙ = [ ^U v^T, ^U ω^T ]^T expressed with respect to (wrt) U represents the generalized velocity of a frame U moving relative to frame 0 (i.e. the motion of U wrt 0 expressed in the coordinates of U treated as static):

    ^U ṙ = ^U( ^0_U ṙ )    (6)

It consists of the linear velocity v and the rotational velocity ω. The 6 × 1 column vector ^U F expresses a generalized force and consists of a 3-element force vector and a 3-element torque vector. In this case the force is applied at the origin of the coordinate frame U and is expressed wrt the same frame. The transformations ^U_Q F and ^U_Q V transform generalized forces and velocities respectively, expressed wrt Q, into those entities expressed wrt U, when both frames are affixed to the same rigid body:

    ^U F = ^U_Q F ^Q F,    ^U ṙ = ^U_Q V ^Q ṙ    (7)

When free vectors are used (such as increments of position, orientation, velocity or force) there exists a need to express them wrt a frame with an orientation other than the one in which they were initially expressed. In this case one can use the ^C( ^U ṙ ) notation, in which the generalized velocity of frame U wrt 0 is expressed wrt frame C. For that purpose a transformation matrix ^C_U Ω, composed of two ^C_U R rotation blocks on its diagonal, is used [42]:

    ^C( ^U ṙ ) = ^C_U Ω ^U ṙ    (8)
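The A and A^{-1} operators of (5) can be sketched with the standard angle-axis (Rodrigues) formulas. This is an illustrative implementation under that assumption, not the authors' code, and it ignores the θ = π corner case of the matrix logarithm.

```python
import numpy as np

def A(r):
    """Operator A of (5): 6-vector [position; angle-axis rotation
    vector] -> 4x4 homogeneous matrix, via the Rodrigues formula."""
    p, w = r[:3], r[3:]
    theta = np.linalg.norm(w)
    if theta < 1e-12:
        R = np.eye(3)
    else:
        k = w / theta
        K = np.array([[0.0, -k[2], k[1]],
                      [k[2], 0.0, -k[0]],
                      [-k[1], k[0], 0.0]])
        R = np.eye(3) + np.sin(theta) * K + (1.0 - np.cos(theta)) * K @ K
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, p
    return T

def A_inv(T):
    """Inverse operator A^{-1} of (5): homogeneous matrix -> 6-vector."""
    R, p = T[:3, :3], T[:3, 3]
    theta = np.arccos(np.clip((np.trace(R) - 1.0) / 2.0, -1.0, 1.0))
    if theta < 1e-12:
        w = np.zeros(3)
    else:
        w = theta / (2.0 * np.sin(theta)) * np.array(
            [R[2, 1] - R[1, 2], R[0, 2] - R[2, 0], R[1, 0] - R[0, 1]])
    return np.concatenate([p, w])
```

The two operators are mutual inverses away from the θ = π singularity, which is what the virtual effector relies on when converting pose errors to the angle-axis form.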
Fig. 7: The transition function ^{e,E}f_{tr,b,1,1} of the virtual effector controlling the right manipulator using impedance control in the operational space
The example in the following text focuses on a behaviour
realizing extended impedance control in the operational space.
This behaviour uses the transition function ^{e,E}f_{tr,b,1}, decomposed into ^{e,E}f_{tr,b,1,1} and ^{e,E}f_{tr,b,1,2}, each producing the value of a different component of the output buffer ^E_y e_{tr,b}. The function ^{e,E}f_{tr,b,1,1} computes τ_d in the following way (fig. 7). The pose error obtained from the control subsystem is transformed into the angle-axis representation:

    ^E r_{d,m} = A^{-1}( ^E T_{d,m} )    (9)

The resulting vector ^E r_{d,m} can be imagined as a six-dimensional spring stretched between the current and the desired end-effector pose. On the basis of the measured pose of the wrist (obtained through ^E_x e_{tr,b}) and the known transformation between the wrist and the end-effector (obtained through ^c_x e_{tr,b}), the pose of the end-effector of the manipulator is determined:

    ^0_E T_m = ^0_W T_m ^W_E T    (10)

The increment between the current pose of the end-effector and its pose memorised in the previous iteration of the virtual effector behaviour:

    ^E T_{m,p} = ( ^0_E T_p )^{-1} ^0_E T_m,    (11)

enables the computation of the current velocity of the end-effector:

    ^E r_{m,p} = A^{-1}( ^E T_{m,p} )    (12)

Next, a certain force, called the computed force ^E F_c, is associated with the end-effector. It is computed on the basis of the length of the imaginary spring ^E r_{d,m}, the velocity of the end-effector ^E r_{m,p}, the desired effector stiffness K_d, the desired damping D_d and the desired force exerted on the environment ^E F_d:

    ^E F_c = K_d ^E r_{d,m} + D_d ^E r_{m,p} + ^E F_d    (13)

It is subsequently transformed into the wrist reference frame:

    ^W F_c = ^W_E F ^E F_c    (14)

^W_E F is derived from ^W_E T. Finally, the current value of the manipulator Jacobian J_m is obtained from the real effector through ^E_x e_{tr,b}. This enables the computation of the desired joint torque, which is subsequently sent to the real effector:

    τ_d = ( J_m )^T ^W F_c    (15)

The partial transition function responsible for the computation of the desired joint torque is thus defined as:

    ^E_y e^{ι+1}_{tr,b}[ τ_d ] := ^{e,E}f_{tr,b,1,1}( ^e e_{tr,b}, ^c_x e_{tr,b}, ^E_x e_{tr,b} ) =
    = ( J_m )^T ^W_E F ( K_d A^{-1}( ^E T_{d,m} ) + D_d A^{-1}( ( ^0_E T_p )^{-1} ^0_W T_m ^W_E T ) + ^E F_d )    (16)
The square brackets on the left-hand side single out the place within the buffer for storing the computed value. The joint regulators within the real effector should function as torque controllers with compensation of both gravity and the partial dynamics of the manipulator, so both the joint stiffness K_d and the joint damping D_d vectors should be set to zero. In this case the value of q_d is irrelevant; however, it was also set to zero. Thus the partial transition function ^{e,E}f_{tr,b,1,2} computing the remaining real effector control parameters is trivial:

    ^E_y e^{ι+1}_{tr,b}[ K_d, D_d, q_d ] := ^{e,E}f_{tr,b,1,2}() = [ 0, 0, 0 ],    (17)

where 0 = [ 0, . . . , 0 ]^T is of an adequate dimension. Additionally, it is necessary to define the transition function of the virtual effector responsible for storing the current pose (which in the next step will be treated as the previous one):

    ^e e^{ι+1}_{tr,b}[ ^0_E T_m ] := ^{e,e}f_{tr,b,1}( ^c_x e_{tr,b}, ^E_x e_{tr,b} ) = ^0_W T_m ^W_E T    (18)

Fig. 8: The proprioceptive function ^{e,c}f_{tr,b,1} of the virtual effector

Also a proprioceptive transition function, responsible for conveying the state of the manipulator to the control subsystem, must be defined (fig. 8):

    ^c_y e^{ι+1}_{tr,b}[ ^0_E T_m, ^E F_m, J_m ] := ^{e,c}f_{tr,b,1}( ^c_x e_{tr,b}, ^E_x e_{tr,b} ) =
    = [ ^0_W T_m ^W_E T, ( ^W_E F )^{-1} ^W F_m, J_m ]    (19)

Hence the transition function ^e f_{tr,b,1} of the virtual effector is composed of:

    ^e f_{tr,b,1} ≜ ( ^{e,e}f_{tr,b,1}, ^{e,c}f_{tr,b,1}, ^{e,E}f_{tr,b,1,1}, ^{e,E}f_{tr,b,1,2} )    (20)
The behaviour ^e B_{tr,b,1} is executed when the input buffer ^c_x e_{tr,b} gets a new command from the control subsystem, i.e.:

    ^e f^σ_{tr,b,1} ≜ ( a new command is present in ^c_x e_{tr,b} )    (21)

It is a single-step behaviour, thus:

    ^e f^τ_{tr,b,1} ≜ TRUE,    (22)

and, moreover, no special error condition is assumed:

    ^e f^ε_{tr,b,1} ≜ FALSE.    (23)

Hence the ^e B_{tr,b,1} behaviour is defined as:

    ^e B_{tr,b,1} ≜ ^e B_{tr,b,1}( ^e f_{tr,b,1}, ^e f^τ_{tr,b,1}, ^e f^ε_{tr,b,1} )    (24)

When the virtual effector is not triggered into activity by the control subsystem, it activates the default behaviour ^e B_{tr,b,0}, responsible for holding the manipulator at a standstill (i.e. it sends the same desired motor pose to the real effector in each step) as well as for providing the proprioceptive information to the control subsystem.
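The chain of computations (13)-(15) performed by ^{e,E}f_{tr,b,1,1} condenses to a few lines of code. The sketch below is illustrative, not the authors' implementation; the dimensions are assumptions (6-element generalized force and pose-error vectors, a 6×7 Jacobian for the 7-DOF arm, and ^W_E F given as a 6×6 matrix).

```python
import numpy as np

def computed_joint_torque(E_r_dm, E_r_mp, K_d, D_d, E_F_d, W_E_F, J_m):
    """Sketch of (13)-(15): compute the end-effector force from the
    imaginary spring and damper, transform it to the wrist frame and
    project it to joint torques through the transposed Jacobian."""
    E_F_c = K_d * E_r_dm + D_d * E_r_mp + E_F_d  # (13) computed force
    W_F_c = W_E_F @ E_F_c                        # (14) wrist-frame force
    return J_m.T @ W_F_c                         # (15) desired joint torque
```

With a pure 1 cm pose error along x, stiffness 1000 and a toy Jacobian whose seventh column is zero, the first joint receives a torque of 10 (in consistent units) and the redundant seventh joint receives none.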
C. Virtual receptor representing an RGB-D camera
The general method of object recognition utilises 3D object models. Those models are represented by two types of point clouds: a dense colour cloud (used mainly during model generation for visualization purposes) and a sparse cloud of features. The current implementation uses SIFT features (Scale Invariant Feature Transform) [43], because they are still treated as the most robust and discriminative. An exemplary model is presented in fig. 9. Each model of an object is generated separately on the basis of a set of views taken off-line by looking at the object from different angles, where each such view consists of: an RGB image, a depth map and an automatically generated object mask (a binary image determining the pixels belonging to the object and those constituting the background).
Fig. 9: Exemplary 3-D point-based model: (a) dense RGB point cloud, (b) sparse feature cloud
Fig. 10: Data flow diagram of the receptor-reading aggregation function ${}^{r}\!f_{tr,lkin,1}$ responsible for point-based object recognition
The generation of the hypotheses by the reading aggregation function ${}^{r}\!f_{tr,lkin,1}$ is presented in fig. 10. Initially a feature cloud is extracted out of the acquired RGB-D image obtained from the Kinect sensor via the ${}^{R}\!xr_{tr,lkin}$ buffer. Then SIFT features are extracted from the image $I^{RGB}_c$:
$$ I^{SIFT}_c = FE\big(I^{RGB}_c\big). \quad (25) $$
Next their coordinates are transformed from the image to the Cartesian space. This is done using the knowledge of their distances from the sensor (the depth map $I^{D}_c$) and the known intrinsic parameters of the camera $P$. This operation results in the sparse cloud of SIFT features with Cartesian coordinates wrt the camera (Kinect) frame:
$$ {}^{C}C^{SIFT}_c = IC\big(I^{SIFT}_c, I^{D}_c, P\big). \quad (26) $$

Fig. 11: Example of a 3-D point-based object hypothesis: (a) the exemplary scene, (b) point cloud containing the scene and the object model along with the found correspondences

This cloud is subsequently compared with the feature clouds of all models stored in the virtual receptor memory $C^{SIFT}_M$.
FLANN [44] is used as the procedure of feature matching:
$$ {}^{C}C^{SIFT}_{c,M} = FM\big({}^{C}C^{SIFT}_c, C^{SIFT}_M\big). \quad (27) $$
It is an efficient implementation of the k nearest neighbours algorithm. Next the resulting set of correspondences ${}^{C}C^{SIFT}_{c,M}$ is clustered:
$$ {}^{C}H_{c,M} = CL\big({}^{C}C^{SIFT}_{c,M}\big). \quad (28) $$
This operation takes into account: (a) the model to which the given cluster of correspondences refers, (b) the similarity of transformations between points forming the given cluster and the corresponding model points, as well as (c) the Cartesian distance between points belonging to the cluster. The idea is similar to the clustering method used by the MOPED framework [45], however enhanced with spatial information gained from the depth map. Finally, out of the set of hypotheses ${}^{C}H_{c,M}$ the one with the highest matching factor is selected and the object instance pose wrt the camera frame is estimated:
$$ {}^{C}_{G}T_c = PE\big({}^{C}H_{c,M}\big). \quad (29) $$
The resulting pose is subsequently conveyed to the control subsystem through the ${}^{c}\!yr_{tr,lkin}$ buffer. Thus the definition of
the transition function is as follows:
$$ {}^{c}\!yr^{i+1}_{tr,lkin}[{}^{C}_{G}T_c] := {}^{r,c}\!f_{tr,lkin,1}\big(rr_{tr,lkin}, {}^{R}\!xr_{tr,lkin}\big) = PE\Big(CL\Big(FM\big(IC\big(FE(I^{RGB}_c), I^{D}_c, P\big), C^{SIFT}_M\big)\Big)\Big). \quad (30) $$
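The composition above can be illustrated with a small sketch. The following Python code is not the authors' implementation: FE (SIFT extraction) is omitted, FLANN is replaced by a brute-force nearest-neighbour search, and PE estimates only a translation; the names mirror the symbols of eqs. (25)-(29) but the bodies are simplified stand-ins:

```python
# Illustrative sketch of the reading-aggregation pipeline of eq. (30).
import math

def _dist(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def IC(features, depth, fx, fy, cx, cy):
    """Image -> Cartesian (eq. 26): back-project (u, v) features through
    the depth map using pinhole intrinsics."""
    cloud = []
    for (u, v), desc in features:
        z = depth[v][u]
        cloud.append(([(u - cx) * z / fx, (v - cy) * z / fy, z], desc))
    return cloud

def FM(scene, model, max_desc_dist=0.5):
    """Descriptor matching (eq. 27): nearest-neighbour stand-in for FLANN."""
    matches = []
    for p_s, d_s in scene:
        p_m, d_best = min(model, key=lambda m: _dist(d_s, m[1]))
        if _dist(d_s, d_best) <= max_desc_dist:
            matches.append((p_s, p_m))
    return matches

def CL(matches, radius):
    """Greedy clustering of correspondences (eq. 28) by Cartesian distance."""
    clusters = []
    for p_s, p_m in matches:
        for c in clusters:
            if _dist(c[0][0], p_s) < radius:
                c.append((p_s, p_m))
                break
        else:
            clusters.append([(p_s, p_m)])
    return clusters

def PE(clusters):
    """Pose estimation (eq. 29): pick the best-supported hypothesis and
    estimate the translation only (a full version would fit a rigid
    transform, e.g. via SVD)."""
    best = max(clusters, key=len)
    n = len(best)
    return [sum(p_s[k] - p_m[k] for p_s, p_m in best) / n for k in range(3)]

# Toy usage: a two-point model observed 1 m further along Z.
model = [([0.0, 0.0, 0.0], [1.0, 0.0]), ([1.0, 0.0, 0.0], [0.0, 1.0])]
scene = [([p[0], p[1], p[2] + 1.0], d) for p, d in model]
t = PE(CL(FM(scene, model), radius=2.0))
```

On the toy data the recovered translation is the 1 m offset along Z, which stands in for the object pose ${}^{C}_{G}T_c$ conveyed to the control subsystem.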
The ${}^{r}\mathcal{B}_{tr,lkin,1}$ behaviour is triggered by the presence of an RGB-D image in the input buffer ${}^{R}\!xr_{tr,lkin}$:
$$ {}^{r}\!f^{\sigma}_{tr,lkin,1} \triangleq \mathrm{new}\big({}^{R}\!xr_{tr,lkin}\big), \quad (31) $$
lasts one step, thus:
$$ {}^{r}\!f^{\tau}_{tr,lkin,1} \triangleq \mathrm{TRUE} \quad (32) $$
and no special error condition is assumed:
$$ {}^{r}\!f^{\varepsilon}_{tr,lkin,1} \triangleq \mathrm{FALSE}. \quad (33) $$
The ${}^{r}\mathcal{B}_{tr,lkin,1}$ behaviour is finally defined as:
$$ {}^{r}\mathcal{B}_{tr,lkin,1} \triangleq \mathcal{B}\big({}^{r,c}\!f_{tr,lkin,1}, \; {}^{r}\!f^{\tau}_{tr,lkin,1}, \; {}^{r}\!f^{\varepsilon}_{tr,lkin,1}\big). \quad (34) $$
When there is no image to process, the virtual receptor enters the idle state (default behaviour ${}^{r}\mathcal{B}_{tr,lkin,0}$) and waits until data appears in the input buffer from the real receptor.
D. Control subsystem
The control subsystem of the Velma robot exhibits several behaviours: impedance control of the full body posture, realization of different types of grasps, tracking moving objects with the cameras integrated with its active head using visual servoing, and visually guided motion of its end-effectors. For brevity only two behaviours are presented: realization of visual servoing (used when the manipulator holding the lid approaches the box) and execution of a guarded motion (used for moving the lid towards the box until contact is detected).
1) Behaviour ${}^{c}\mathcal{B}_{tr,1}$: As it was mentioned before, the RGB-D camera is mounted on Velma's second arm. However this arm is immobile during the execution of the described task, so the resulting visual servo can be classified as PB-EOL-SAC (position based, not observing the end-effector pose, stand-alone camera). The visual servo does not modify the contents of the memory. It also does not have to configure its virtual receptor or contact other agents. As its only purpose is to control its virtual effector, only the effector control function ${}^{c,e}\!f_{tr,1}$ has to be defined. Other transition functions (i.e. ${}^{c,c}\!f_{tr,1}$, ${}^{c,T}\!f_{tr,1}$ and ${}^{c,r}\!f_{tr,1}$) are not required. As the buffer ${}^{e}\!yc_{tr,b}$ has 3 components, ${}^{c,e}\!f_{tr,1}$ is decomposed into 3 separate functions: ${}^{c,e}\!f_{tr,1,1}$, ${}^{c,e}\!f_{tr,1,2}$ and ${}^{c,e}\!f_{tr,1,3}$, each computing the value of one of those 3 components.
The effector control function ${}^{c,e}\!f_{tr,1,1}$ (fig. 12) first computes the pose of the object of interest with respect to the end-effector reference frame. Thus it acquires the current object (goal) pose ${}^{C}_{G}T_c$ from the virtual receptor $r_{tr,lkin}$ through ${}^{r}\!xc_{tr,lkin}$. The control subsystem $c_{tr}$ holds in its memory $cc_{tr}$ the pose ${}^{0}_{C}T$ of the immobile camera (SAC) with respect to the global frame of reference. The current pose of the effector ${}^{0}_{E}T_m$ is obtained from the ${}^{e}\!xc_{tr,b}$ input buffer.

Fig. 12: Data flow diagram of the function ${}^{c,e}\!f_{tr,1,1}$ involved in the execution of the PB-EOL-SAC visual servo

Those three elements are required for the computation of the pose of the object with respect to the end-effector reference frame:
$$ {}^{E}_{G}T_c = \big({}^{0}_{E}T_m\big)^{-1} \; {}^{0}_{C}T \; {}^{C}_{G}T_c. \quad (35) $$
The desired displacement (offset) ${}^{E}_{G}T_d$ between the object and the end-effector is stored in memory $cc_{tr}$; it represents the displacement of the gripper holding the lid wrt the box, such that the lid will approximately fit the box (by executing a straight-line motion along the Z axis of the gripper). Taking into account the current and the desired displacement of the object and the end-effector, the error is computed:
$$ {}^{E}T_{d,c} = {}^{E}_{G}T_c \; \big({}^{E}_{G}T_d\big)^{-1}. \quad (36) $$
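The pose algebra of eqs. (35)-(36) can be sketched with plain homogeneous matrices. The helpers below (mat_mul, inv, translate) are illustrative assumptions, not the authors' code:

```python
# A small sketch of eqs. (35)-(36) on 4x4 homogeneous transforms.
def mat_mul(A, B):
    return [[sum(A[i][k] * B[k][j] for k in range(4)) for j in range(4)]
            for i in range(4)]

def inv(T):
    """Inverse of a homogeneous transform: transpose R, rotate and negate p."""
    R = [[T[j][i] for j in range(3)] for i in range(3)]
    p = [-sum(R[i][j] * T[j][3] for j in range(3)) for i in range(3)]
    return [R[0] + [p[0]], R[1] + [p[1]], R[2] + [p[2]],
            [0.0, 0.0, 0.0, 1.0]]

def translate(x, y, z):
    return [[1.0, 0.0, 0.0, x], [0.0, 1.0, 0.0, y],
            [0.0, 0.0, 1.0, z], [0.0, 0.0, 0.0, 1.0]]

# eq. (35): object pose in the end-effector frame (toy translations only).
E_G_T_c = mat_mul(mat_mul(inv(translate(1, 0, 0)), translate(0, 1, 0)),
                  translate(0, 0, 1))
# eq. (36): with the desired offset equal to the current one, the error
# becomes the identity, i.e. the terminal condition of eq. (42) holds.
E_T_dc = mat_mul(E_G_T_c, inv(E_G_T_c))
```

The identity error matrix is exactly the situation in which the visual-servo behaviour terminates.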
This displacement may be too large to be executed in one control step, hence a realizable increment is computed:
$$ {}^{E}T_{d,m} = R_C\big({}^{E}T_{d,c}\big). \quad (37) $$
The regulator $R_C$ executes the Cartesian space regulation. It transforms the homogeneous matrix pose representation ${}^{E}T_{d,c}$ into a vector $V = [X, Y, Z, x, y, z, \theta]$, where $[X, Y, Z]$ represents the coordinates of the Cartesian position, whereas the angle $\theta$ and axis $[x, y, z]$ describe the orientation. The axis versor $[x, y, z]$ is fixed for the given step, hence only $X$, $Y$, $Z$, $\theta$ undergo regulation. The result is clipped. Finally the results are retransformed into the homogeneous matrix representation and subsequently sent to the virtual effector through ${}^{e}\!yc_{tr}$. Thus the definition of ${}^{c,e}\!f_{tr,1,1}$ is:
$$ {}^{e}\!yc^{i+1}_{tr,b}[{}^{E}T_{d,c}] := {}^{c,e}\!f_{tr,1,1}\big(cc^{i}_{tr}, {}^{e}\!xc^{i}_{tr,b}, {}^{r}\!xc^{i}_{tr,lkin}\big) = R_C\Big(\big({}^{0}_{E}T_m\big)^{-1} \, {}^{0}_{C}T \, {}^{C}_{G}T_c \, \big({}^{E}_{G}T_d\big)^{-1}\Big). \quad (38) $$
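The regulator $R_C$ can be sketched as follows. This is a minimal illustration under assumed gains and clipping limits, not the actual Velma controller:

```python
# A minimal sketch of the Cartesian regulator R_C of eq. (37): the pose
# error matrix is converted to [X, Y, Z, x, y, z, theta] (position plus
# axis-angle), the position and angle are scaled and clipped, and the
# result is reassembled into a homogeneous matrix. The gain and the
# limits are assumed values.
import math

def R_C(T, gain=0.1, lin_limit=0.005, ang_limit=0.01):
    def clip(v, lim):
        return max(-lim, min(lim, gain * v))
    R = [row[:3] for row in T[:3]]
    p = [T[0][3], T[1][3], T[2][3]]
    # Axis-angle extraction from the rotation part.
    c = max(-1.0, min(1.0, (R[0][0] + R[1][1] + R[2][2] - 1.0) / 2.0))
    theta = math.acos(c)
    if abs(math.sin(theta)) > 1e-9:
        s2 = 2.0 * math.sin(theta)
        axis = [(R[2][1] - R[1][2]) / s2,
                (R[0][2] - R[2][0]) / s2,
                (R[1][0] - R[0][1]) / s2]
    else:
        axis = [0.0, 0.0, 1.0]  # no rotation: the axis versor is arbitrary
    p_step = [clip(v, lin_limit) for v in p]
    th = clip(theta, ang_limit)
    # Rodrigues' formula rebuilds the incremental rotation matrix.
    x, y, z = axis
    ct, st, vt = math.cos(th), math.sin(th), 1.0 - math.cos(th)
    Rs = [[ct + x * x * vt, x * y * vt - z * st, x * z * vt + y * st],
          [x * y * vt + z * st, ct + y * y * vt, y * z * vt - x * st],
          [x * z * vt - y * st, y * z * vt + x * st, ct + z * z * vt]]
    return [Rs[0] + [p_step[0]], Rs[1] + [p_step[1]],
            Rs[2] + [p_step[2]], [0.0, 0.0, 0.0, 1.0]]
```

For a pure 1 m translation error along X the returned increment is clipped to the assumed 5 mm per-step limit, so large errors are executed gradually over many control steps.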
The supplementary function ${}^{c,e}\!f_{tr,1,2}$ computes the remaining parameters of the impedance-control law of the virtual effector, i.e. the desired values of stiffness $K_d$, damping $D_d$ and force $F_d$:
$$ {}^{e}\!yc^{i+1}_{tr,b}[K_d, D_d, F_d] := {}^{c,e}\!f_{tr,1,2}\big(cc^{i}_{tr}\big) = [K_{d,high}, D_{d,crit}, \mathbf{0}], \quad (39) $$
where $D_{d,crit}$ is the critical damping computed according to [46] and $K_{d,high}$ represents the stiffness of the end-effector motion along the three translational and three rotational directions, i.e.:
$$ K_{d,high} = [K_{d[x],high}, K_{d[y],high}, K_{d[z],high}, K_{d[ax],high}, K_{d[ay],high}, K_{d[az],high}]^T. \quad (40) $$
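The choice of critical damping can be illustrated with the standard per-axis formula $D_{crit} = 2\sqrt{Km}$ for a point-mass model; the sketch below is an assumption-laden illustration, and the numerical values are not those computed in [46]:

```python
# Sketch of selecting the impedance parameters of eq. (39): stiffness is
# high for the visual-servo phase, and damping is set critical so the
# pose error decays without oscillation. D_crit = 2*sqrt(K*m) assumes an
# independent point-mass model per direction; all values are assumed.
import math

def critical_damping(stiffness, inertia):
    return [2.0 * math.sqrt(k * m) for k, m in zip(stiffness, inertia)]

K_d_high = [1500.0] * 3 + [150.0] * 3   # assumed N/m and Nm/rad values
inertia = [10.0] * 3 + [1.0] * 3        # assumed effective inertias
D_d_crit = critical_damping(K_d_high, inertia)
```

In a real impedance controller the effective inertia varies with the manipulator configuration, so the critical damping has to be recomputed on-line.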
Additionally, the control subsystem sends to the virtual effector the pose of the end-effector wrt the wrist (which in the example is constant), so the third partial transition function is:
$$ {}^{e}\!yc^{i+1}_{tr,b}[{}^{W}_{E}T] := {}^{c,e}\!f_{tr,1,3}\big(cc^{i}_{tr}\big) = {}^{W}_{E}T. \quad (41) $$
The behaviour terminates when the desired relative pose of the end-effector wrt the box is achieved:
$$ {}^{c}\!f^{\tau}_{tr,1} \triangleq \Big(\big({}^{0}_{E}T_m\big)^{-1} \, {}^{0}_{C}T \, {}^{C}_{G}T_c = {}^{E}_{G}T_d\Big). \quad (42) $$
Assuming that the behaviour does not require any special error conditions aborting its execution:
$$ {}^{c}\!f^{\varepsilon}_{tr,1} \triangleq \mathrm{FALSE}. \quad (43) $$
Thus the behaviour ${}^{c}\mathcal{B}_{tr,1}$ is defined as follows:
$$ {}^{c}\mathcal{B}_{tr,1} \triangleq \mathcal{B}\big({}^{c,e}\!f_{tr,1,1}, \; {}^{c,e}\!f_{tr,1,2}, \; {}^{c,e}\!f_{tr,1,3}, \; {}^{c}\!f^{\tau}_{tr,1}, \; {}^{c}\!f^{\varepsilon}_{tr,1}\big). \quad (44) $$
2) Behaviour ${}^{c}\mathcal{B}_{tr,2}$: The second behaviour starts when the gripper holds the lid in a pose enabling the robot to put it on the box. This is done by moving the lid along a straight line defined by the Z axis of the gripper frame (the displacement represented as ${}^{E}_{G}T_d$ in the ${}^{c}\mathcal{B}_{tr,1}$ behaviour). The end-effector moves with a constant, small velocity $v_{d[z]}$ along that axis:
$$ {}^{e}\!yc^{i+1}_{tr,b}[{}^{E}T_{d,c}] := {}^{c,e}\!f_{tr,2,1}\big(cc^{i}_{tr}\big) = A^{-1}\big([0, 0, v_{d[z]}\,\Delta i, 0, 0, 0]\big), \quad (45) $$
where $\Delta i$ represents the duration of a single step of the behaviour. The impedance parameters are set to make the manipulator compliant, hence the stiffness is low and the damping is set as critical. As no extra force has to be exerted on the box, it is set to zero:
$$ {}^{e}\!yc^{i+1}_{tr,b}[K_d, D_d, F_d] := {}^{c,e}\!f_{tr,2,2}\big(cc^{i}_{tr}\big) = [K_{d,low}, D_{d,crit}, \mathbf{0}]. \quad (46) $$
The control subsystem also has to send the transformation between the wrist and the end-effector frame:
$$ {}^{e}\!yc^{i+1}_{tr,b}[{}^{W}_{E}T] := {}^{c,e}\!f_{tr,2,3}\big(cc^{i}_{tr}\big) = {}^{W}_{E}T. \quad (47) $$
The forces and torques exerted by the end-effector, obtained from the virtual effector through ${}^{e}\!xc_{tr,b}$, are used to detect contact with the box:
$$ {}^{c}\!f^{\tau}_{tr,2} \triangleq \big(F_{m[z]} \geqslant F_{m[z],limit}\big), \quad (48) $$
where $F_{m[z],limit}$ is the threshold force. Again it was assumed that an error condition will not occur, hence:
$$ {}^{c}\!f^{\varepsilon}_{tr,2} \triangleq \mathrm{FALSE}, \quad (49) $$
so the definition of the behaviour ${}^{c}\mathcal{B}_{tr,2}$ is:
$$ {}^{c}\mathcal{B}_{tr,2} \triangleq \mathcal{B}\big({}^{c,e}\!f_{tr,2,1}, \; {}^{c,e}\!f_{tr,2,2}, \; {}^{c,e}\!f_{tr,2,3}, \; {}^{c}\!f^{\tau}_{tr,2}, \; {}^{c}\!f^{\varepsilon}_{tr,2}\big). \quad (50) $$
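The guarded motion loop can be simulated in a few lines. The sketch below assumes a toy one-dimensional environment and arbitrary numerical values; it only illustrates the terminal condition of eq. (48), not the actual impedance-controlled execution:

```python
# A minimal simulation sketch of the guarded motion of behaviour cB_tr,2:
# a constant small velocity command is issued along the gripper Z axis
# until the measured force exceeds the contact threshold (eq. 48).
def guarded_motion(read_force, send_velocity, v_d_z=0.01, f_limit=1.5,
                   max_steps=10000):
    for step in range(max_steps):
        if read_force() >= f_limit:  # terminal condition of eq. (48)
            send_velocity(0.0)       # stop on contact
            return step
        send_velocity(v_d_z)         # eq. (45): constant small velocity
    raise RuntimeError("no contact detected")

class Env:
    """Toy world: the lid free-travels 0.05 m, then the force ramps up."""
    def __init__(self):
        self.z = 0.0
    def force(self):
        return 0.0 if self.z < 0.05 else 100.0 * (self.z - 0.05)
    def move(self, v):
        self.z += v

env = Env()
steps = guarded_motion(env.force, env.move)
```

Because the manipulator is compliant (low stiffness, critical damping), crossing the force threshold rather than a position target is what ends the motion, which is exactly the point of a guarded motion.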
IV. EXAMPLES OF ROBOT SYSTEMS AND THEIR TASKS
A. Force control
Force sensing (touch) is paramount to the control of robot-environment interaction. The presented specification method was used to develop several robot systems utilizing position-force control, i.e. classical force-control benchmarks such as following an unknown contour or rotating a crank [47], copying drawings by a single-arm [48] and a two-arm robot [49], as well as a dual-arm robot system acting as a haptic device [50].
B. Visual servoing
Manipulation is based both on touch and sight, thus visual servoing has been extensively studied. As a result of the work on the classification and taxonomy of structures of visual servos [39], diverse embodied agents utilizing both immobile and mobile cameras [38], [51], as well as the methods of their testing and verification, were developed.
C. Robot playing checkers
The presented design method was also used for the development of several robotic systems combining force control and visual servoing. One such system was able to play the game of checkers with a human opponent [52]. The system consisted of two agents: an agent controlling a modified IRp-6 manipulator with a camera integrated with a two-finger gripper (fig. 13) and an agent using artificial intelligence to deduce the robot's ply. Visual servoing was utilized to approach a pawn, while force control was used to grasp it. Moreover, experiments with parallel visual-force control [53] were also conducted.
Fig. 13: Gripper of the modified IRp-6 manipulator above the
checkers board
D. Rubik's cube puzzle

Another, even more sophisticated system that had been developed using the presented design method was a dual-arm system solving the Rubik's cube puzzle (fig. 14) [54], [55]. At first sight it might seem that solving the Rubik's cube puzzle is an easy task for a robot, but in reality it is not so. The robot has to acquire a very metamorphic object (a Rubik's cube has trillions of appearances, and there are no assumptions as to the background or lighting conditions). It requires two hands to rotate its faces, and moreover, the faces can jam if not aligned properly (this requires position-force control). The cube is handed to the robot by a human, thus neither is its location known a priori, nor is the cube static when being acquired (this requires visual servoing). Once grasped, the state of the cube must be identified, and out of that information a plan must be deduced, i.e. the sequence of rotations of the faces leading to the state in which each of the faces contains tiles of the same colour (this requires artificial intelligence search algorithms for solving the puzzle). It is important that the plan of actions
cannot be produced off-line; it must be defined while the task is being executed, as the initial state of the cube is not known in advance. None of the above steps can take too long, thus time-efficient algorithms must be utilized. Above all, all of those capabilities must be integrated into a single system. The two-handed system contained: two kinematically modified IRp-6 robots, special purpose control hardware (which was subsequently modified to its current version [56]), and shape grippers [57]. In particular, indirect force control was used to prevent face jamming during cube manipulation [58], [59]. This work was the primary benchmark for both the robotic system formal specification method and the MRROC++ robot programming framework [42].

Fig. 14: Manipulation of the Rubik's cube
E. Active vision
The presented design method was also utilized for the development of several agents using active vision for purposeful acquisition of visual information regarding their surroundings. In particular, a single robot system able to distinguish between a simple convex object and its flat imitation [60] was developed (fig. 15). Yet another example of active vision is a system proactively recognizing operator hand postures (fig. 16), described in detail in [61].
F. Velma opening the doors

One of the tasks that the Velma robot was put to was opening doors. This is a capability one would expect of a service robot. This task can be realized in various ways. If the connection of the manipulator end-effector to the door is rigid, there is no need to estimate the kinematics of the door (fig. 17) [62]. However, the majority of methods are based on door kinematics estimation, because for common grippers and door handles a stiff connection cannot be guaranteed. Here the tests were conducted using an impedance controlled manipulator with torque controllers in its joints [46]. The pose of the door handle was obtained by using the cameras mounted in the active head (fig. 4) [63] to extract the door features (fig. 18) [64]. The BarrettHand gripper was used to grasp the handle [65].
Fig. 15: The modified IRp-6 manipulator equipped with a camera observing a blue ball and its flat imitation from different angles
G. A robot-based reconfigurable fixture

Automotive and aircraft industries commonly use components made of thin metal or composite sheets in their products. To machine those deformable parts, devices for locating, constraining, and providing stable support are required, i.e. fixtures. Standard fixtures are large molds reproducing the shape to be supported. Such fixtures are dedicated to a single workpiece, so altogether they are numerous, expensive and not reconfigurable. Thus, a system consisting of a set of mobile platforms carrying parallel-type manipulators equipped with deformable heads was built to act as a reconfigurable fixture [66]–[69] (fig. 19). The heads support large flexible workpieces in the vicinity of machining. When the location of machining changes, the supporting robots translocate themselves to this location. The machined workpieces have complex spatial geometry. Drilling requires static placement of the supporting robots, while milling necessitates their continuous relocation during machining. In the latter case two heads consecutively relocate and affix themselves to the supported thin metal sheet, so that one of them supports the workpiece while the other relocates itself in such a way as to precede the working tool. The mobile base motion plan and the sequences of manipulator postures are automatically generated off-line based on the CAD/CAM model of the workpiece [70]. The control system, taking the thus generated task plan as input, was designed using the presented approach. A multi-agent system operating in an industrial environment resulted. The supervisory agent employed a Petri net to coordinate the actions of the embodied agents [71].

Fig. 16: Stages of the active hand pose recognition: (a-c) approach the hand-like object, (d) pose identification

Fig. 17: KUKA LWR arm with the characteristic coordinate frames and transformations between them, while opening cabinet doors

Fig. 18: Velma robot localizing the door features using its prototype active head
H. Box pushing
The presented approach has also been applied to the design of behavioural controllers for robots executing a joint transportation task [72]. In that example the robots observed the effects of the activities of other robots on the environment (stigmergy) [73]. Each mobile robot acted independently, knowing only the goal that the box should reach. Relying on the observation of the behaviour of the box, the robots reacted
in such a way that the box finally reached the goal. Each robot used 4 concurrently acting partial transition functions. The results of their computations were composed using weighted superposition. Each partial transition function used feedback to correct the behaviour of the box: inducing a motion of the box, locating the robot at a box corner, keeping the front of the robot towards the goal, and keeping the front of the box facing the target.
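Such a weighted superposition of partial transition functions can be sketched as follows; the two toy partial functions and all weights are assumptions for illustration:

```python
# Sketch of composing concurrently acting partial transition functions by
# weighted superposition: each partial function proposes a velocity
# command for the current state and the proposals are blended.
def compose(partials, weights, state):
    cmds = [f(state) for f in partials]
    total = sum(weights)
    vx = sum(w * c[0] for w, c in zip(weights, cmds)) / total
    vy = sum(w * c[1] for w, c in zip(weights, cmds)) / total
    return (vx, vy)

# Toy partials: head towards the goal and drift to keep formation.
push = lambda s: (s["goal"][0] - s["x"], s["goal"][1] - s["y"])
align = lambda s: (0.0, 1.0)
v = compose([push, align], [0.75, 0.25],
            {"x": 0.0, "y": 0.0, "goal": (4.0, 0.0)})
```

Because each partial function reads the same observed state, the blend reacts to the box's actual behaviour in every step, which is what makes the stigmergic coordination work without explicit communication.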
Fig. 19: Robot-based reconfigurable fixture and the CNC machine
V. CONCLUSIONS

The presented method of designing robot control systems is based on a general architecture (universal model) of an embodied agent, which is subsequently tailored to the required hardware of the system and the task it has to execute. The method requires the definition of the structure of the communication buffers and internal memory of the subsystems of the agent. Those buffers are used as arguments of transition functions that define the behaviours of subsystems. All behaviours are based on the same pattern presented in fig. 2. The thus defined behaviours are assembled into the overall task of a subsystem by defining an adequate FSM. The general task is realised by the control subsystem. Usually the enumerated design steps are executed iteratively, refining the specification. Once the specification is detailed enough, it is used for the implementation of the required system. The design method on the one hand imposes a certain domain-specific architectural structure, but on the other is not constraining with respect to the control paradigm. Multi-tier architectures can be composed by assembling agents into layers. Specific agents or their subsystems can be reactive or deliberative, as well as hybrid. The level of abstraction depends on the definition of the data structures contained in the buffers. The paper has presented both an example of such a specification for a simple robot system and a set of systems that have been designed using this approach. The diversity of those systems, both in terms of the hardware utilised and the executed tasks, enables us to conclude that this design method is general enough to be applied to any robotic system.
ACKNOWLEDGMENT
This project was financially supported by the National Centre for Research and Development grant no. PBS1/A3/8/2012.
Tomasz Winiarski is supported by EU in the framework of
the European Social Fund through the Warsaw University of
Technology Development Programme.
REFERENCES
[1] S. Kaisler, Software Paradigms. Wiley Interscience, 2005.
[2] C. Blume and W. Jakob, PASRO: Pascal for Robots. Springer-Verlag, 1985.
[3] C. Blume and W. Jakob, Programming Languages for Industrial Robots, 1986.
[4] V. Hayward and R. P. Paul, "Robot manipulator control under Unix RCCL: A robot control C library," International Journal of Robotics Research, vol. 5, no. 4, pp. 94–111, Winter 1986.
[5] P. Corke and R. Kirkham, "The ARCL robot programming system," 14–16 July 1993, pp. 484–493.
[6] J. Lloyd, M. Parker, and R. McClain, "Extending the RCCL programming environment to multiple robots and processors," 1988, pp. 465–469.
[7] V. Hayward and S. Hayati, "Kali: An environment for the programming and control of cooperative manipulators," in 7th American Control Conference, 1988, pp. 473–478.
[8] P. Backes, S. Hayati, V. Hayward, and K. Tso, "The Kali multi-arm robot programming and control environment," in NASA Conference on Space Telerobotics, 1989, p. 897.
[9] V. Hayward, L. Daneshmend, and S. Hayati, "An overview of KALI: A system to program and control cooperative manipulators," in Advanced Robotics, K. Waldron, Ed. Berlin: Springer-Verlag, 1989, pp. 547–558.
[10] A. Nilakantan and V. Hayward, "The synchronisation of multiple manipulators in Kali," Robotics and Autonomous Systems, vol. 5, no. 4, pp. 345–358, 1989.
[11] B. P. Gerkey, R. T. Vaughan, K. Støy, A. Howard, G. S. Sukhatme, and M. J. Matarić, "Most Valuable Player: A robot device server for distributed control," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), 2001, pp. 1226–1231.
[12] T. Collett, B. MacDonald, and B. Gerkey, "Player 2.0: Toward a practical robot programming framework," in Proceedings of the Australasian Conference on Robotics and Automation (ACRA), December 2005. [Online]. Available: http://www.ai.sri.com/~gerkey/papers/acra2005.pdf
[13] R. T. Vaughan and B. P. Gerkey, "Reusable robot software and the Player/Stage project," in Software Engineering for Experimental Robotics, ser. Springer Tracts in Advanced Robotics, D. Brugali, Ed. Springer, 2007, vol. 30, pp. 267–289.
[14] H. Bruyninckx, "Open robot control software: the OROCOS project," in International Conference on Robotics and Automation (ICRA), vol. 3. IEEE, 2001, pp. 2523–2528.
[15] H. Bruyninckx, "The real-time motion control core of the OROCOS project," in Proceedings of the IEEE International Conference on Robotics and Automation. IEEE, September 2003, pp. 2766–2771.
[16] P. Fitzpatrick, G. Metta, and L. Natale, "Towards long-lived robot genes," Robotics and Autonomous Systems, vol. 56, no. 1, pp. 29–45, 2008.
[17] G. Metta, P. Fitzpatrick, and L. Natale, "YARP: Yet Another Robot Platform," International Journal on Advanced Robotics Systems, vol. 3, no. 1, pp. 43–48, 2006.
[18] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Orebäck, "Orca: A component model and repository," in Software Engineering for Experimental Robotics, ser. Springer Tracts in Advanced Robotics, D. Brugali, Ed. Springer, 2007, vol. 30, pp. 231–251.
[19] A. Brooks, T. Kaupp, A. Makarenko, S. Williams, and A. Orebäck, "Towards component-based robotics," in IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS'05), August 2005, pp. 163–168.
[20] M. Quigley, B. Gerkey, K. Conley, J. Faust, T. Foote, J. Leibs, E. Berger, R. Wheeler, and A. Ng, "ROS: an open-source Robot Operating System," in Proceedings of the Open-Source Software Workshop at the International Conference on Robotics and Automation (ICRA), 2009.
[21] C. Zieliński, "The MRROC++ system," in Proceedings of the First Workshop on Robot Motion and Control, RoMoCo'99, June 1999, pp. 147–152.
[22] C. Zieliński, W. Szynkiewicz, K. Mianowski, and K. Nazarczuk, "Mechatronic design of open-structure multi-robot controllers," Mechatronics, vol. 11, no. 8, pp. 987–1000, November 2001.
[23] C. Zieliński, "Formal approach to the design of robot programming frameworks: the behavioural control case," Bulletin of the Polish Academy of Sciences – Technical Sciences, vol. 53, no. 1, pp. 57–67, March 2005.
[24] C. Zieliński, "Systematic approach to the design of robot programming frameworks," in Proceedings of the 11th IEEE International Conference on Methods and Models in Automation and Robotics (on CD). Technical University of Szczecin, 29 August – 1 September 2005, pp. 639–646.
[25] C. Zieliński, "Transition-function based approach to structuring robot control software," in Robot Motion and Control, ser. Lecture Notes in Control and Information Sciences, K. Kozłowski, Ed. Springer-Verlag, 2006, vol. 335, pp. 265–286.
[26] D. Brugali, G. S. Broten, A. Cisternino, D. Colombo, J. Fritsch, B. Gerkey, G. Kraetzschmar, R. Vaughan, and H. Utz, "Trends in robotic software frameworks," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 259–266.
[27] J. Fritsch and S. Wrede, "An integration framework for developing interactive robots," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 291–305.
[28] A. Bonarini, M. Matteucci, and M. Restelli, "MRT: Robotics off-the-shelf with the Modular Robotic Toolkit," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 345–364.
[29] I. Nesnas, "The CLARAty project: Coping with hardware and software heterogeneity," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 9–30.
[30] R. A. Brooks, "A robust layered control system for a mobile robot," IEEE Journal of Robotics and Automation, vol. 2, no. 1, pp. 14–23, 1986.
[31] R. C. Arkin, Behavior-Based Robotics. MIT Press, 1998.
[32] A. Cisternino, D. Colombo, V. Ambriola, and M. Combetto, "Increasing decoupling in the Robotics4.NET framework," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 307–324.
[33] B. Dittes and C. Goerick, "A language for formal design of embedded intelligence research systems," Robotics and Autonomous Systems, vol. 59, no. 3–4, pp. 181–193, March–April 2011.
[34] D. Brugali, A. Agah, B. MacDonald, I. Nesnas, and W. Smart, "Trends in robot software domain engineering," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 3–8.
[35] D. Brugali, "Stable analysis patterns for robot mobility," in Software Engineering for Experimental Robotics, D. Brugali, Ed. Springer-Verlag, 2007, pp. 9–30.
[36] M. Klotzbücher, R. Smits, H. Bruyninckx, and J. De Schutter, "Reusable hybrid force-velocity controlled motion specifications with executable domain specific languages," in 2011 IEEE/RSJ International Conference on Intelligent Robots and Systems, September 25–30, 2011, San Francisco, USA, 2011, pp. 5684–4689.
[37] A. Kleppe, Software Language Engineering: Creating Domain-Specific Languages Using Metamodels. Addison-Wesley, 2009.
[38] T. Kornuta and C. Zieliński, "Robot control system design exemplified by multi-camera visual servoing," Journal of Intelligent & Robotic Systems, pp. 1–25, 2013.
[39] M. Staniak and C. Zieliński, "Structures of visual servos," Robotics and Autonomous Systems, vol. 58, no. 8, pp. 940–954, 2010.
[40] G. Schreiber, A. Stemmer, and R. Bischoff, "The fast research interface for the KUKA lightweight robot," in IEEE ICRA Workshop on Innovative Robot Control Architectures for Demanding (Research) Applications – How to Modify and Enhance Commercial Controllers. Anchorage, 2010.
[41] A. Albu-Schäffer, C. Ott, and G. Hirzinger, "A unified passivity-based control framework for position, torque and impedance control of flexible joint robots," The International Journal of Robotics Research, vol. 26, no. 1, pp. 23–39, 2007.
[42] C. Zieliński and T. Winiarski, "Motion generation in the MRROC++ robot programming framework," International Journal of Robotics Research, vol. 29, no. 4, pp. 386–413, 2010.
[43] D. G. Lowe, "Distinctive image features from scale-invariant keypoints," International Journal of Computer Vision, vol. 60, no. 2, pp. 91–110, 2004.
[44] M. Muja and D. G. Lowe, "Fast approximate nearest neighbors with automatic algorithm configuration," in VISAPP (1), 2009, pp. 331–340.
[45] A. Collet, M. Martinez, and S. S. Srinivasa, "The MOPED framework: Object Recognition and Pose Estimation for Manipulation," The International Journal of Robotics Research, vol. 30, no. 10, pp. 1284–1306, 2011.
[46] T. Winiarski and K. Banachowicz, "Opening a door with a redundant impedance controlled robot," in Robot Motion & Control (RoMoCo), 9th Workshop on, 2013, pp. 221–226.
[47] C. Zieliński, W. Szynkiewicz, and T. Winiarski, "Applications of MRROC++ robot programming framework," in Proceedings of the 5th International Workshop on Robot Motion and Control, RoMoCo'05, Dymaczewo, Poland, K. Kozłowski, Ed., June 23–25, 2005, pp. 251–257.
[48] T. Winiarski and C. Zieliński, "Implementation of position-force control in MRROC++," in Proceedings of the 5th International Workshop on Robot Motion and Control, RoMoCo'05, Dymaczewo, Poland, June 23–25, 2005, pp. 259–264.
[49] C. Zieliński and T. Winiarski, "General specification of multi-robot control system structures," Bulletin of the Polish Academy of Sciences – Technical Sciences, vol. 58, no. 1, pp. 15–28, 2010.
[50] T. Winiarski and C. Zieliński, "Specification of multi-robot controllers on an example of a haptic device," in Robot Motion and Control, ser. Lecture Notes in Control & Information Sciences, K. Kozłowski, Ed., vol. 396. Springer Verlag London Limited, 2009, pp. 227–242.
[51] C. Zieliński, T. Kornuta, and M. Boryń, "Specification of robotic systems on an example of visual servoing," in 10th International IFAC Symposium on Robot Control (SYROCO 2012), vol. 10, no. 1, 2012, pp. 45–50.
[52] T. Kornuta, T. Bem, and T. Winiarski, "Utilization of the FraDIA for development of robotic vision subsystems on the example of checkers-playing robot," Machine GRAPHICS & VISION, vol. 4, pp. 495–520, 2011.
[53] M. Staniak, T. Winiarski, and C. Zieliński, "Parallel visual-force control," in Proceedings of the IEEE/RSJ International Conference on Intelligent Robots and Systems, IROS'08, 2008.
[54] C. Zieliński, W. Szynkiewicz, T. Winiarski, M. Staniak, W. Czajewski, and T. Kornuta, "Rubik's cube as a benchmark validating MRROC++ as an implementation tool for service robot control systems," Industrial Robot: An International Journal, vol. 34, no. 5, pp. 368–375, 2007.
[55] W. Szynkiewicz, C. Zieliński, W. Czajewski, and T. Winiarski, "Control architecture for sensor-based two-handed manipulation," in CISM Courses and Lectures – 16th CISM-IFToMM Symposium on Robot Design, Dynamics and Control, RoManSy'06, June 20–24, T. Zielińska and C. Zieliński, Eds., no. 487. Wien, New York: Springer, 2006, pp. 237–244.
[56] M. Walecki, K. Banachowicz, and T. Winiarski, "Research oriented motor controllers for robotic applications," in Robot Motion and Control 2011, ser. Lecture Notes in Control & Information Sciences, K. Kozłowski, Ed., vol. 422. Springer Verlag London Limited, 2012, pp. 193–203.
[57] C. Zieliński, T. Winiarski, K. Mianowski, A. Rydzewski, and W. Szynkiewicz, "End-effector sensors' role in service robots," in Robot Motion and Control 2007, ser. Lecture Notes in Control & Information Sciences, K. Kozłowski, Ed. Springer Verlag London Limited, June 11–13, 2007, pp. 401–413.
[58] T. Winiarski and A. Woźniak, "Indirect force control development procedure," Robotica, vol. 31, no. 4, pp. 465–478, 2013.
[59] T. Winiarski and M. Walecki, "Motor cascade position controllers for service oriented manipulators," in Recent Advances in Automation, Robotics and Measuring Techniques, ser. Advances in Intelligent Systems and Computing (AISC), R. Szewczyk, C. Zieliński, and M. Kaliczyńska, Eds. Springer, 2014, pp. 533–542.
[60] T. Kornuta and ukasz Zmuda, Specification of the structure andbehaviours of a robotic system able to determine object convexity,in 18th IEEE International Conference on Methods and Models in
Automation and Robotics, MMAR2013. IEEE, 2013, pp. 350355.[61] T. Kornuta and C. Zielinski, Behavior-based control system of a robot
actively recognizing hand postures, in 15th IEEE International Confer-ence on Advanced Robotics, ICAR, June 2011, pp. 265270.
[62] T. Winiarski, K. Banachowicz, and M. Stefanczyk, "Safe strategy of door opening with impedance controlled manipulator," Journal of Automation, Mobile Robotics and Intelligent Systems, vol. 7, no. 4, pp. 21–26, 2013.
[63] M. Walecki, M. Stefanczyk, and T. Kornuta, "Control system of the active head of a service robot exemplified on visual servoing," in Robot Motion and Control (RoMoCo), 9th Workshop on, 2013, pp. 48–53.
[64] M. Stefanczyk and M. Walecki, "Localization of essential door features for mobile manipulation," in Recent Advances in Automation, Robotics and Measuring Techniques, ser. Advances in Intelligent Systems and Computing (AISC). Springer, 2014, pp. 487–496.
[65] T. Winiarski, K. Banachowicz, and D. Seredynski, "Multi-sensory feedback control in door approaching and opening," Intelligent Systems (accepted), 2014.
[66] L. Xiong, R. Molfino, and M. Zoppi, "Fixture layout optimization for flexible aerospace parts based on self-reconfigurable swarm intelligent fixture system," International Journal of Advanced Manufacturing Technology, pp. 1305–1313, 2012.
[67] L. de Leonardo, M. Zoppi, X. Li, D. Zlatanov, and R. Molfino, "Swarmitfix: A multi-robot-based reconfigurable fixture," Industrial Robot, pp. 320–328, 2013.
[68] C. Zielinski, W. Kasprzak, T. Kornuta, W. Szynkiewicz, P. Trojanek, M. Walecki, T. Winiarski, and T. Zielinska, "Control and programming of a multi-robot-based reconfigurable fixture," Industrial Robot: An International Journal, vol. 40, no. 4, pp. 329–336, 2013.
[69] C. Zielinski, T. Kornuta, P. Trojanek, T. Winiarski, and M. Walecki, "Specification of a multi-agent robot-based reconfigurable fixture control system," in Robot Motion and Control 2011, ser. Lecture Notes in Control and Information Sciences (LNCiS), vol. 422, 2012, pp. 171–182.
[70] T. Zielinska, W. Kasprzak, W. Szynkiewicz, and C. Zielinski, "Path planning for robotized mobile supports," Journal of Mechanism and Machine Theory, vol. 78, pp. 51–64, 2014.
[71] P. Trojanek, T. Kornuta, and C. Zielinski, "Design of asynchronously stimulated robot behaviours," in Robot Motion and Control (RoMoCo), 9th Workshop on, K. Kozłowski, Ed., 2013, pp. 129–134.
[72] C. Zielinski and P. Trojanek, "Stigmergic cooperation of autonomous robots," Journal of Mechanism and Machine Theory, vol. 44, pp. 656–670, April 2009.
[73] E. Bonabeau, M. Dorigo, and G. Theraulaz, Swarm Intelligence: From Natural to Artificial Systems. New York, Oxford: Oxford University Press, 1999.