Augmented Reality Used for a Remote Robot Control
Radu Cătălin Ţarcă,
UNESCO Chair in Information Technologies
Andrei Ivanescu, Ioana Barda
University Politehnica of Bucharest
Grigore Albeanu
Spiru Haret University,
Ildiko Pasc, Florin Popentiu Vlădicescu
University of Oradea,
UNESCO Chair in Information Technologies
ABSTRACT
This paper presents how an Augmented Reality interface can be used to control a remote robot via the Internet. The Mitsubishi Telerobot project demonstrates how much an improved Augmented Reality interface can increase the performance of a telerobotic system without changing any of the telerobot's technical features.
Key words: remote control, Internet, augmented reality
1. INTRODUCTION
Augmented reality (AR) enhances a human’s view of a scene by
superimposing virtual objects on a view of the real world.
Augmented reality allows the user to see the real world, with
virtual objects superimposed upon or composited with the real
world. Therefore, AR supplements reality, rather than
completely replacing it [1].
In most applications, a scene is captured by a camera and
additional information is displayed by suitably designed virtual objects added to the real scene view, giving the user a better perception of the world state.
Azuma (1997) defines Augmented Reality as any system with the following three characteristics:
• combines real and virtual;
• is interactive in real time;
• is registered in three dimensions.
Although AR systems may also augment other human senses,
like the auditory or haptic sense, most current systems only
implement the visual channel.
This paper presents the possibilities of using augmented reality to control a robot system via the Internet.
In 1998 Harald Friz developed, in his diploma thesis, an AR tool used to specify the position and orientation of the robot's end effector [4]. In October 2003 a research team from the University of Western Australia in Perth created an AR tool (version 1.0) for the UWA Telerobot, which allows operators to model objects for easier robot manipulation [12].
Our research team gives another solution for telerobot control. In the first step we built the telerobot system, and then we developed an AR interface that allows operators to create the 3D model of any part in the visual field and to overlay this model on the real object, thereby obtaining the position of the object's centre of mass and its orientation. With this information it is easy to command the robot via the Internet to pick up the object and place it anywhere in the robot workspace. Our AR interface follows a new concept and makes it possible to manipulate any kind of object, not only prismatic ones (as in the previous approaches).
2. THE TELEROBOT SYSTEM CONCEPT
The concept of “human supervisory control” (Sheridan, 1992)
that underlies a telerobot is illustrated in figure 1. The human
operator interacts with the human-interactive computer (HIC), which should provide the human with meaningful and immediate feedback.
The subordinate task-interactive computer (TIC) that
accompanies the controlled robot receives commands, translates
them into executable command sequences, and controls
command execution.
Fig. 1. Basic concept of an Internet telerobot
In a supervisory control system the human supervisor has the
following functions:
• Planning what task to do and how to do it.
• Teaching the computer what was planned.
• Monitoring the automatic action to make sure all is going as
planned and to detect failures.
• Intervening, which means that the operator supplements
ongoing automatic control activities, takes over control
entirely after the desired goal has been reached
satisfactorily, or interrupts the automatic control to teach a
new plan.
• Learning from experience so as to do better in the future.
The role of computers in telerobotics can be classified according to how much of the task load the computer carries compared to what the human operator alone can carry. Human and computer can trade or share control. Trading control includes the following cases:
• The computer replaces the human. It has full control over
the system.
• The computer backs up the human.
• The human backs up the computer.
The most common case in telerobotics is sharing control,
meaning that the human and the computer control different
aspects of the task:
• The computer relieves the human operator from certain
tasks. This is very common in telerobotics when the remote
system performs subtasks according to the plans specified
by the human operator.
• The computer extends the human’s capabilities. This
typically occurs in telerobotics when high precision of
movements and applied forces is required.
3. THE MITSUBISHI TELEROBOT PROJECT
The System Structure
The telerobot system developed by our research team is
presented in figure 2.
The telerobot system structure consists of two servers: the first one is the local computer (HIC) and the second one is the remote computer (TIC). As the robot we used a Mitsubishi Movemaster RV-M1 robot with 5 axes.
The structure of the system is similar to the one presented in figure 1. Different kinds of objects are placed on a table in the robot's workspace.
The task of the telerobot system is to acquire the image of the scene with the objects, to transfer this image to the HIC, and then to calibrate the image; the human operator then creates the 3D model of any part in the visual field and overlays this model on the real object, thereby obtaining the position of the object's centre of mass and its orientation. With this information the robot is driven via the Internet to pick up the object and place it anywhere in the robot workspace.
The scene is observed by a CCD camera (figure 3).
As can be seen, different kinds of objects (prisms, screws, nuts, and bushes) are placed on a rectangular grid in the robot workspace. The images acquired by the CCD camera are compressed and saved to a file. Each image is read from that file and transferred over the Internet, by the communication software, to the HIC, where the operator, using the AR interface, establishes the position and orientation of each object. Using this information a command is generated by the software and transferred over the Internet to the TIC, which commands the telerobot to execute the desired task.
Fig. 2. The telerobot system
Fig. 3. The scene observed by CCD camera
The Communication Software
The communication software is based on Java technologies. A specific protocol over IP was designed for the communication between the servers. A further task for us is to improve the protocol to support plugging new labs into the kernel, in order to create a network of robots/labs.
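The paper does not give the protocol details; as a hedged illustration only, a line-oriented request/response exchange over TCP could look like the sketch below. The message names ("IMAGE?", "PICK") and the port number are assumptions, and the actual implementation is in Java:

```python
# Illustrative sketch of a line-based HIC -> TIC exchange over TCP.
# Message names and port 5000 are assumptions; the real protocol
# described in the paper is a custom Java-based protocol over IP.
import socket

def request(host: str, message: str) -> bytes:
    with socket.create_connection((host, 5000)) as sock:
        sock.sendall(message.encode() + b"\n")   # one command per line
        chunks = []
        while True:                              # read until the TIC closes
            data = sock.recv(4096)
            if not data:
                break
            chunks.append(data)
    return b"".join(chunks)

# jpeg = request("tic.example.org", "IMAGE?")              # fetch the compressed scene image
# reply = request("tic.example.org", "PICK 120 85 10 30")  # send a pick command
```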
The AR Interface
The AR interface has been realized using Matlab.
In the first step the calibration of the system was performed in order to improve the accuracy and usability of the AR interface.
The system calibration consists of two stages. The first one was the camera calibration. To solve this problem we used Devernay and Faugeras' technique for lens distortion removal from structured scenes [2]. The acquired image of the grid is presented in figure 4.
Fig. 4. The acquired image of the grid
The algorithm consists of the following steps:
• edge extraction on the acquired image, as presented in figure 5;
Fig. 5. The edge extraction on the acquired image
• polygonal approximation with a large tolerance on these edges, to extract possible lines from the sequence;
• finding the parameters of the distortion model that best transform these edges into segments;
• generating the "undistorted image" using the parameters computed by this algorithm (k, the radial distortion term; cx and cy, the x and y coordinates of the lens centre, expressed as fractions of the image size relative to the top left corner; and s, the apparent aspect ratio).
Fig. 6. The “undistorted image”
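As a sketch of the last step, a first-order radial model with the parameters named above (k, cx, cy, s) can be applied pointwise; the code below is our own illustration, not the project's Matlab routine or the exact Devernay-Faugeras model:

```python
# Sketch: map distorted pixel coordinates to undistorted ones with a
# first-order radial model. Parameters follow the text: k (radial
# term), cx, cy (lens centre as fractions of image size), s (apparent
# aspect ratio). Illustration only, not the project's implementation.
import numpy as np

def undistort_points(u, v, width, height, k, cx, cy, s):
    # shift to the lens centre and normalise by the image size
    x = u / width - cx
    y = (v / height - cy) / s          # correct the apparent aspect ratio
    r2 = x**2 + y**2
    factor = 1.0 + k * r2              # first-order radial correction
    xu, yu = x * factor, y * factor
    # back to pixel coordinates
    return (xu + cx) * width, (yu * s + cy) * height

u2, v2 = undistort_points(np.array([320.0]), np.array([240.0]),
                          640, 480, k=-0.12, cx=0.5, cy=0.5, s=1.0)
```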
The next step is the interfacing image calibration. The purpose of this module is to map the two-dimensional coordinates shown on the captured image to three-dimensional coordinates in the real space around the grid. The algorithm which recovers the third coordinate dimension (depth) is based on a single vanishing point model (figure 7).
Fig. 7. The vanishing point model
For the point P with coordinates $(u_P, v_P)$ in the image system, its coordinates in the real coordinate system are:

$$X_P = \frac{\overline{AO}_{im}}{\overline{QO}_{im}} \cdot gridLength \qquad (1)$$

$$Z_P = \frac{\overline{PA'}_{im}}{\overline{QO}_{im}} \cdot gridLength \qquad (3)$$

and the depth coordinate $Y_P$ (equation (2)) is obtained from the image distances $\overline{AA'}_{im}$, $\overline{BB'}_{im}$ and $\overline{CC'}_{im}$, likewise scaled by $gridLength$. Here $\overline{AO}_{im}$ is the distance from point A to point O in pixels in the image plane, and $gridLength$ is the real length of the grid.
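A short worked example of equations (1) and (3): the real coordinates follow by simple scaling of the measured pixel distances (all numeric values below are invented for illustration).

```python
# Worked example of equations (1) and (3): real coordinates are
# image-plane distances (pixels) scaled by gridLength / QO_im.
# All numeric values are invented for illustration.
AO_im = 210.0        # distance A-O in the image, pixels
PA_prime_im = 95.0   # distance P-A' in the image, pixels
QO_im = 420.0        # distance Q-O in the image, pixels
grid_length = 400.0  # real length of the grid, mm

X_P = AO_im / QO_im * grid_length        # eq. (1) -> 200.0 mm
Z_P = PA_prime_im / QO_im * grid_length  # eq. (3) -> ~90.5 mm
print(X_P, Z_P)
```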
Fig. 8. Two wireframe models overlaid on the objects
After that, for each type of object a wireframe model is generated using geometrical primitives. Using 3D transformations (translation, rotation and scaling), the wireframe models can be moved to the desired location.
The dimensions of the object model in the image plane are computed through 3D-to-2D transformations, considering the vanishing point, thus resulting in the object's model in the image plane, which is overlaid on the object's image (figure 8).
The position of the object's centre of mass and its orientation are computed through a software procedure and are used to command the robot.
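As a hedged sketch of this pipeline, the code below builds a box wireframe, applies a rigid 3D transformation, and projects the vertices to the image plane with a simple perspective (single vanishing point) model; the focal length and pose values are our assumptions, not the project's calibration.

```python
# Sketch: place a wireframe box in 3D and project it to the image
# plane with a simple perspective model. Focal length and pose
# values are assumptions for illustration.
import numpy as np

def box_wireframe(w, d, h):
    """8 vertices of an axis-aligned box with one corner at the origin."""
    return np.array([[x, y, z] for x in (0, w) for y in (0, d) for z in (0, h)], float)

def rigid_transform(points, yaw_deg, translation):
    """Rotate about the vertical axis, then translate."""
    a = np.radians(yaw_deg)
    R = np.array([[np.cos(a), -np.sin(a), 0],
                  [np.sin(a),  np.cos(a), 0],
                  [0, 0, 1]])
    return points @ R.T + np.asarray(translation, float)

def project(points, focal=800.0):
    """Perspective projection: u = f*x/y, v = f*z/y (y is the depth axis)."""
    return np.stack([focal * points[:, 0] / points[:, 1],
                     focal * points[:, 2] / points[:, 1]], axis=1)

verts = rigid_transform(box_wireframe(40, 40, 25), yaw_deg=30, translation=[120, 600, 0])
pixels = project(verts)      # overlay these vertices on the camera image
centre = verts.mean(axis=0)  # centre of mass of the (uniform) box model
```

For a uniform box the mean of the vertices coincides with the centre of mass, which, together with the yaw angle, is what the robot command needs.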
The Robot Control
Having this information, the human operator transfers a command via the Internet, using Java, to the remote computer; this computer passes the command to the robot controller through the parallel port. The telerobot then executes the task.
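As an illustrative sketch only, Movemaster-style commands (MP for move position, GC/GO for grip close/open) could be written out to the parallel port as below; the device path and the exact command syntax are assumptions, to be checked against the RV-M1 manual.

```python
# Illustrative sketch: drive the Movemaster RV-M1 with textual
# commands written to the parallel port device. The device path and
# the exact command strings are assumptions based on the Movemaster
# command language (MP = move position, GC/GO = grip close/open).
def pick_and_place(port, x, y, z, pitch, roll):
    port.write(f"MP {x},{y},{z + 50},{pitch},{roll}\r".encode())  # approach
    port.write(f"MP {x},{y},{z},{pitch},{roll}\r".encode())       # descend
    port.write(b"GC\r")                                           # close gripper
    port.write(f"MP {x},{y},{z + 50},{pitch},{roll}\r".encode())  # lift

# with open("/dev/lp0", "wb", buffering=0) as port:  # parallel port device (assumed)
#     pick_and_place(port, 120.0, 85.0, 10.0, -90.0, 0.0)
```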
4. CONCLUSION
The Mitsubishi Telerobot project demonstrates how much an improved AR interface can increase the performance of a telerobotic system without changing any of the telerobot's technical features.
The next step in the development of our telerobot system is to include in the AR interface not only the visual sense but also the haptic sense, using haptic gloves and an HMD to command and control the process.
The project succeeded in developing the AR interface for the Mitsubishi Telerobot; its objective was therefore met.
5. REFERENCES
[1] R. T. Azuma (1997): A Survey of Augmented Reality. Presence: Teleoperators and Virtual Environments 6, 4 (August 1997), 355-385.
[2] F. Devernay and O. Faugeras: Automatic calibration and removal of distortion from scenes of structured environments. In SPIE, volume 2567, San Diego, CA, July 1995.
[3] M. Fezani, C. Batouche, A. Benhocine, Study and
Realization of the Basic Methods of the Calibration in
Stereopsis For Augmented Reality, American Journal of
Applied Sciences 4 (3): 297-303, 2007
[4] H. Friz, (1999) Design of an Augmented Reality User
Interface for an Internet based Telerobot using Multiple
Monoscopic Views. Diploma Thesis, Institute for Process
and Production Control Techniques, Technical University
of Clausthal, Clausthal-Zellerfeld, Germany Available at:
http://telerobot.mech.uwa.edu.au
[5] K. H. Y. Fung, B. A. MacDonald and T. H. J. Collett,
Measuring and improving the accuracy of ARDev using
a square grid,
www.araa.asn.au/acra/acra2006/papers/paper_5_45.pdf
[6] Gibson, S., Cook, J., Howard, T., Hubbold, R., Oram, D.:
Accurate camera calibration for off-line, video-based
augmented reality. In: IEEE and ACM International
Symposium on Mixed and Augmented Reality (ISMAR
2002), Darmstadt, Germany (2002)
[7] H. Friz (1998): Design of an Augmented Reality User Interface for an Internet based Telerobot using Multiple Monoscopic Views. Diplomarbeit, Institute for Process and Production Control Techniques, Technical University of Clausthal, Clausthal-Zellerfeld, Germany.
[8] K. Jong-Sung, H. Ki-Sang, A recursive camera resectioning
technique for off-line video-based augmented reality,
Pattern Recognition Letters 28 (2007), pp. 842–853
[9] G. Klancar, M. Kristan, R. Karba, Wide-angle camera
distortions and non-uniform illumination in mobile robot
tracking, Robotics and Autonomous Systems 46 (2004),
pp.125–133.
[10] T. H. Kolbe, Augmented Videos and Panoramas for
Pedestrian Navigation, Proceedings of the 2nd Symposium
on Location Based Services & TeleCartography, 2004,
Vienna
[11] B. Nini, M. Batouche: Utilisation d'une séquence pour l'augmentation en réalité augmentée [Using an image sequence for augmentation in augmented reality], www.irit.fr/recherches/sirv/congres/jig05/nini.pdf
[12] R. Palmer, 2003 Augmented reality and Telerobots,
Honours thesis, University of Western Australia
[13] T. B. Sheridan (1992): Telerobotics, Automation, and Human Supervisory Control. Cambridge, MA: MIT Press.
[14] H. Schaefer, Kalibrierungen für Augmented
Reality, Diplomarbeit, Fachgebiet Photogrammetrie und
Fernerkundung Technische Universität Berlin, November
2003.
[15] R. Y. Tsai: A versatile camera calibration technique for high-accuracy 3D machine vision metrology using off-the-shelf TV cameras and lenses. IEEE Journal of Robotics and Automation, 3(4), Aug 1987.