
Proceedings of the 2007 IEEE 10th International Conference on Rehabilitation Robotics, June 12-15, Noordwijk, The Netherlands

Low level control in a semi-autonomous rehabilitation robotic system via a Brain-Computer Interface

Thorsten Lüth, Darko Ojdanic, Ola Friman, Oliver Prenzel, and Axel Graser

Abstract - In this work, a connection between a semi-autonomous rehabilitation robotic system and Brain-Computer Interfaces (BCI) is described. This paper focuses on a system for user intervention in low-level movement control of an assistive robotic arm. The rehabilitation robotic system allows tetraplegics to control the system with high-level commands (e.g., "grab the bottle") and then to intervene in the execution of the task if they see that something is going wrong. In such a case, the user gets the opportunity to continue the task with low-level control of the robot arm. A system for such control on a low abstraction level by a Brain-Computer Interface based on P300 and steady-state visual evoked potentials (SSVEP) is described in this work.

I. INTRODUCTION

The FRIEND II rehabilitation robotic system is being developed to assist tetraplegics in daily life tasks and thus to increase their autonomy. FRIEND II is an electrical wheelchair equipped with a 7 degrees-of-freedom (DOF) manipulator as well as additional sensors and actuators, such as a stereo-camera system mounted on a pan-tilt head and a smart tray with a tactile surface and an integrated scale (see Fig. 1). The control of the system usually takes place on a high abstraction level, e.g., with user commands like "Pour in a drink" or "Take out bottle from fridge" [1]. Subsequent to the user-initiated task selection on the abstract level, the system plans a sequence of operations that are required to solve a certain task. These operations organize the access of the system's sensors and actuators, but if necessary, also involve the user in task execution, according to the principle of semi-autonomy [2].

During task execution, the manipulator has to be capable of performing various operations autonomously. Motion planning algorithms that smoothly drive the manipulator through the environment without collision with obstacles are already in place [3]. However, due to the uncertainties present in the data coming from the sensors (cameras, artificial skin, etc.), the estimated positions of objects can differ slightly from the true values. This can cause failures in the task execution, e.g., in a grasping process. The user can then take over the control of the manipulator and adjust the gripper location. Moreover, the user controls the manipulator in an intuitive manner by using Cartesian commands, while the system controls the redundancy of the manipulator in parallel (more details are provided in Section III). In this way it supports the user in utilizing all the capabilities of the 7-DOF manipulator for more dexterous manipulation in cluttered environments. After the gripper adjustment by the user, the system can proceed with the execution of the remaining tasks as planned. In this paper, a solution that gives the user direct control of the robot arm with a Brain-Computer Interface is described.

Manuscript received February 4, 2007. This research project has been supported by a Marie Curie Transfer of Knowledge Fellowship of the European Community's Sixth Framework Programme under contract number MTKD-CT-2004-014211. The AMaRob project is funded by the BMBF - Federal Ministry of Education and Research ("Bundesministerium für Bildung und Forschung"), Hannoversche Straße 28-30, D-10115 Berlin, Germany.

Thorsten Lüth is with the Institute of Automation, University of Bremen, Otto-Hahn-Allee, D-28359 Bremen, Germany (corresponding author; phone: +49-421-218-3580; fax: +49-421-218-4596). Darko Ojdanic, Ola Friman, Oliver Prenzel, and Axel Graser are with the Institute of Automation, University of Bremen, Otto-Hahn-Allee, D-28359 Bremen, Germany.

Fig. 1. FRIEND II system controlled by a Brain-Computer Interface.

Brain-Computer Interfaces provide control of software or hardware applications by interpreting patterns in the brain activity of the user [4]. Several types of brain activity can be used in a Brain-Computer Interface (BCI). In this work two types of brain activity patterns are used: the P300 waveform and steady-state visual evoked potentials (SSVEP). The P300 waveform can be measured in the brain about 300 ms after a stimulus expected by the person has been presented [5]. The SSVEP signal arises in the visual cortex when a person is looking at a continuously blinking light source [6]. These signals are in this work measured non-invasively with an electroencephalograph (EEG), i.e., with electrodes attached to the subject's scalp.

The paper is organized as follows. In Section II it is illustrated how the BCI-driven direct control of the manipulator smoothly integrates into the overall control concept. Section III describes the mathematical background for the manipulator direct control and motion planning, as well as how the system utilizes the redundancy in the direct control. The graphical user interface is presented in Section IV, and the P300 and SSVEP signals are explained in detail in Section V. For experimental evaluation of the presented method, a typical support task is used in this work: grasping an object. The experimental results are presented in Section VI.

II. OVERALL CONTROL CONCEPT

The control architecture MASSiVE (Multi-layer architecture for semi-autonomous service-robots with verified task execution [7]) has been developed for service robotic systems like FRIEND II. An important characteristic of MASSiVE with relevance to this paper is its input device abstraction concept [8]. This enables the adaptation of any input medium (speech, eye-mouse, BCI, etc.). Additional important features of MASSiVE are the operation on the basis of pre-structured task knowledge as well as the semi-autonomous task execution. These design aspects are justified by the application area of rehabilitation robotics.

Fig. 2: Activity diagram of task execution in MASSiVE.

Fig. 2 depicts the activity diagram of task planning and execution in MASSiVE. Initially, the task request from the user is received. According to the task selection, pre-structured task knowledge [7] is retrieved from the database and initialized within a semi-autonomous monitoring process [9]. Subsequently, task-relevant information about the target objects to be manipulated as well as about obstacles is available in the system's world model for further steps of task execution. After the initial task planning, as depicted in Fig. 2, autonomous operations are executed first, and in case of their failure, user interactions are initiated. A continuous planning process in the system's execution layer controls the activation and deactivation of operations according to the return values of low-level operations [10].

Fig. 3: Section of a process structure with user interaction.

However, as also shown in Fig. 2, user interference is possible at any time during autonomously executed operations. Two types of interaction are available: complete abortion of the current task and interruption of the current operation. The availability of the latter option is determined on the basis of process structures, which belong to the mentioned pre-structured task knowledge. A simplified section of a process structure is depicted in Fig. 3 in the form of a function block diagram. The blocks describe the low-level operations with respect to their name, type (e.g., manipulative operation or user interaction), parameters, required resources and available return values. It is shown that the autonomous manipulator operation "MoveToObject" can terminate successfully, erroneously (return value "Failure") or can be interrupted by the user (return value "UserTakeOver"). In the latter case, the "FineAdjustToObject" operation is started, which is, as indicated by the operation type "USER", coordinated by the system's human machine interface (HMI). HMI coordination means that an interaction dialog is shown to the user, which is parameterized by pre-structured task knowledge according to the current task execution context. Furthermore, constraints for the user interaction are determined, such as the context-dependent configuration of input devices as well as restrictions on which actuators can be controlled by the user and how they are controlled [2]. The principle of BCI-controlled fine adjustment of the manipulator will be introduced in the following section.

III. MANIPULATOR MOTION

The option to control the manipulator directly with a BCI will be used only for small movements, e.g., for fine adjustments of the gripper location in case the sensors do not provide sufficiently precise position information. In the example shown in Fig. 3, direct robot control is a part of the operation "FineAdjustToObject". Other movements, e.g., moving the gripper to the proximity of the object, are controlled by motion planning algorithms, i.e., the operation "MoveToObject".


A. Motion planning

In contrast to the available methods that mainly use the configuration space, leading to large calculation times for robots with many DOF, the motion planning approach used in the FRIEND II system is based on Cartesian space information. The motion planning algorithm and its implementation are described in [12]. The method operates fast, but since it is a local planner it is not able to avoid local minima (dead-ends) in the robot workspace. In order to solve the problem of local minima, a global planner is currently being developed which will be combined with the local planner and a trajectory smoothing procedure [3].

Although the primary objective of motion planning is to reach the goal location without collision, the redundancy of the robot should also be taken into account. That is, when the manipulator reaches the target location, its configuration should be as far as possible from the joint limits and obstacles. This makes the ensuing movements simpler, regardless of who executes them: the user or the system. Moreover, the object to be grasped often has the shape of a cylinder (bottle, glass, handles, etc.). This introduces an additional degree of redundancy, since the grasping of the object can be done from several directions.

Fig. 4. Calculating additional goal frames for a cylindrical target object.

A method that utilizes this redundancy is integrated into the local planner. First of all, additional grasping frames are calculated by incrementally rotating the given goal frame about the object axis. The frames are determined as depicted in Fig. 4, where the angle φ is a predefined parameter. In this example the axis of the cylinder is parallel to the z-axis of the world coordinate system, meaning that the object is placed vertically on a platform. The corresponding matrix calculation for additional goals is given by the equation

T_{G_i} = T_O \cdot \mathrm{Rot}(z, i\varphi) \cdot T_O^{-1} \cdot T_G \qquad (1)

where T_O is the object frame, T_G is the gripper goal frame, and i = ±1, ±2, ..., ±N. These additional frames are also considered in motion planning. Therefore, if it happens that the given goal frame is not reachable due to the joint limits or obstacles, other frames might be reachable and the manipulator will still be able to approach the target object.
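As an illustration of Eq. (1), the following C++ sketch computes the additional grasp frames for a cylindrical object. It is a minimal sketch assuming a plain 4x4 homogeneous-matrix representation; the type and function names (Mat4, mul, rotZ, inverseRigid, additionalGoalFrames) are illustrative and not taken from the FRIEND II implementation.

#include <array>
#include <cmath>
#include <vector>

using Mat4 = std::array<std::array<double, 4>, 4>;

// 4x4 homogeneous matrix product.
Mat4 mul(const Mat4& a, const Mat4& b) {
    Mat4 c{};
    for (int i = 0; i < 4; ++i)
        for (int j = 0; j < 4; ++j)
            for (int k = 0; k < 4; ++k)
                c[i][j] += a[i][k] * b[k][j];
    return c;
}

// Rotation about the z-axis as a homogeneous transformation.
Mat4 rotZ(double angle) {
    const double c = std::cos(angle), s = std::sin(angle);
    Mat4 R = {{{c, -s, 0, 0},
               {s,  c, 0, 0},
               {0,  0, 1, 0},
               {0,  0, 0, 1}}};
    return R;
}

// Inverse of a rigid transformation: [R t]^-1 = [R^T, -R^T t].
Mat4 inverseRigid(const Mat4& T) {
    Mat4 inv{};
    for (int i = 0; i < 3; ++i) {
        for (int j = 0; j < 3; ++j) inv[i][j] = T[j][i];
        inv[i][3] = -(T[0][i] * T[0][3] + T[1][i] * T[1][3] + T[2][i] * T[2][3]);
    }
    inv[3] = {0, 0, 0, 1};
    return inv;
}

// Eq. (1): additional grasp frames T_Gi obtained by rotating the given goal
// frame T_G about the object axis (z-axis of the object frame T_O) by i*phi,
// for i = +/-1, ..., +/-N.
std::vector<Mat4> additionalGoalFrames(const Mat4& T_O, const Mat4& T_G,
                                       double phi, int N) {
    std::vector<Mat4> frames;
    const Mat4 T_O_inv = inverseRigid(T_O);
    for (int i = -N; i <= N; ++i) {
        if (i == 0) continue;  // i = 0 is the originally given goal frame
        frames.push_back(mul(mul(mul(T_O, rotZ(i * phi)), T_O_inv), T_G));
    }
    return frames;
}

In this form the originally planned goal frame corresponds to i = 0, and the candidates are generated symmetrically around it, which matches the incremental rotation about the object axis described above.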

B. Direct control

The direct control of the manipulator or robot arm can be performed either in the workspace coordinate systems or in the configuration space. The workspace represents the Cartesian space in which the robot moves. The configuration space is the set of all possible configurations of a manipulator, where one configuration represents a set of joint values. Operating the robot in the configuration space can be confusing to the user, and for fine movements it is common practice to use the workspace coordinate system for direct control.

1) Workspace commands

There are two different coordinate systems usually used for manipulators: the World and the Gripper coordinate systems. The World coordinate system has a fixed coordinate origin and axes that are usually placed at the base of the manipulator. The Gripper coordinate system is attached to the gripper (see Fig. 7). In some situations the manipulator is more easily controlled in the World coordinate system, and in other situations the Gripper coordinate system is preferred. In this work, both options are implemented, so that the user can easily switch between them. Determining which coordinate system is more suitable for control strongly depends on the user's preferences, as well as on the particular task. A typical example where the use of the Gripper coordinate system is preferred is the case of grasping an object when the gripper is located directly in front of the object. The user would like to perform a straight-line motion with respect to the gripper, which corresponds to the z-axis of the Gripper coordinate system. On the other hand, the FRIEND II system has a gripper with the fingers slightly directed to the side (imitating the human hand, see Fig. 5). This characteristic can sometimes create a situation where the World coordinate system is preferred.

A 3D workspace is fully described with six coordinates. Three coordinates define the translation (x, y, z) and the other three the rotational part of the manipulator pose. Rotation is typically described with the roll, pitch and yaw angles, representing the rotations about the coordinate axes. As is usual in robotics, homogeneous transformation matrices (4x4) are used for describing the transformation between coordinate systems [11]. The transformation of the gripper coordinate system with respect to the world coordinate system will here be denoted the gripper frame. Suppose that the user controls the translation of the gripper (the dialog is presented in Fig. 7) and selects the x-axis with the step size Δs; the corresponding transformation matrix can then be written

T_T = \mathrm{Trans}(d \cdot \Delta s, 0, 0) = \begin{bmatrix} 1 & 0 & 0 & d \cdot \Delta s \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (2)

where d defines the selected direction of the motion (±1). If the user operates in the Gripper coordinate system, the next goal frame for the gripper is

T_{G,new} = T_G \cdot T_T \qquad (3)

The matrix T_G represents the current gripper frame. In case the user is operating in the World coordinate system, the transformation T_T has to be multiplied from the left, which results in the different goal frame

T_{G,new} = T_T \cdot T_G \qquad (4)

If the user wants to perform a rotation in the Gripper coordinate system, for example about the y-axis by the angular step size Δs in the direction d, the corresponding goal frame is calculated as

T_{G,new} = T_G \cdot T_{Rot} = T_G \cdot \mathrm{Rot}(y, d \cdot \Delta s) = T_G \cdot \begin{bmatrix} \cos(d \cdot \Delta s) & 0 & \sin(d \cdot \Delta s) & 0 \\ 0 & 1 & 0 & 0 \\ -\sin(d \cdot \Delta s) & 0 & \cos(d \cdot \Delta s) & 0 \\ 0 & 0 & 0 & 1 \end{bmatrix} \qquad (5)

The goal frame for a rotation about the other axes is calculated in a similar way, but with the appropriate matrix T_Rot. The resulting frame for the World coordinate system is calculated analogously to the translation example above. The velocity of the motion is in this work predefined and constant. During the experimental phase this has proven appropriate. If further investigations show that different velocities are required, the interaction dialog could be expanded with one more control input.
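The update rules of Eqs. (2)-(5) can be condensed into a short sketch. It builds on the Mat4 type and mul() helper of the previous sketch; the axis encoding (0 = x/roll, 1 = y/pitch, 2 = z/yaw) and the function names are assumptions made here for compactness, not part of the original system.

#include <cmath>

// Identity transformation.
Mat4 identity() {
    Mat4 I{};
    for (int i = 0; i < 4; ++i) I[i][i] = 1.0;
    return I;
}

// Eq. (2): translation step of size d*ds along a single axis (0 = x, 1 = y, 2 = z).
Mat4 translationStep(int axis, double d, double ds) {
    Mat4 T = identity();
    T[axis][3] = d * ds;
    return T;
}

// Rotation step of size d*ds about a single axis (0 = x, 1 = y, 2 = z),
// e.g. Eq. (5) for the y-axis.
Mat4 rotationStep(int axis, double d, double ds) {
    const double c = std::cos(d * ds), s = std::sin(d * ds);
    Mat4 R = identity();
    const int a = (axis + 1) % 3, b = (axis + 2) % 3;
    R[a][a] = c;  R[a][b] = -s;
    R[b][a] = s;  R[b][b] = c;
    return R;
}

// Eqs. (3) and (4): apply the step either in the Gripper frame (post-multiply)
// or in the World frame (pre-multiply).
Mat4 applyStep(const Mat4& T_G, const Mat4& T_step, bool gripperFrame) {
    return gripperFrame ? mul(T_G, T_step)   // Eq. (3): T_G,new = T_G * T_T
                        : mul(T_step, T_G);  // Eq. (4): T_G,new = T_T * T_G
}

For example, applyStep(T_G, translationStep(0, +1, step), true) moves the gripper one step along the x-axis of the Gripper frame (Eq. (3)), while passing false applies the same step in the World frame (Eq. (4)).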

2) Utilizing redundancy

After the goal frame is defined, the inverse kinematics solution is calculated. In other words, the joint angles that correspond to the desired gripper location are determined. Since the manipulator of the FRIEND II system is redundant, there is no unique solution to the inverse kinematics problem. The set of configurations that match one gripper location forms a motion of the elbow on the so-called redundancy circle (see Fig. 6). The inverse kinematics solution for the 7-DOF manipulator developed at the Institute of Automation at the University of Bremen [1] can efficiently calculate the resulting configuration for a given angle on the redundancy circle.

The user will use the direct control only for small movements. Hence, it is reasonable to keep the angle on the redundancy circle at the same value. However, in some particular situations it can happen that for a given angle on the redundancy circle, the resulting configuration lies outside the joint limits. In such a case, the manipulator is not able to reach the desired location. Using a slightly modified redundancy angle, e.g., by adding a small additional movement of the elbow, could produce a valid configuration. This is a typical example illustrating the advantage of using redundant manipulators, as the redundancy provides more alternatives for the movement.

Fig. 6. Movement of the elbow on the redundancy circle. α is the angle on the redundancy circle corresponding to one configuration.

In the direct control of the manipulator using a BCI presented in this work, the system continuously supports the user by utilizing the redundancy of the robot arm: when a new configuration is not valid, when the joint values are too close to the joint limits, or when the minimal distance to obstacles is too small, the system scans the redundancy circle to find out whether a better configuration is available. This approach enables the user to control the manipulator in an intuitive manner while the system supports him by fully utilizing the capabilities of the manipulator.
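A possible structure of such a redundancy-circle scan is outlined below. It is only a sketch under stated assumptions: solveIK, withinJointLimits and minObstacleDistance stand in for the system's own inverse kinematics [1], joint-limit test and obstacle-distance computation, none of which are reproduced here, and Mat4 is the type from the earlier sketches.

#include <cmath>
#include <optional>
#include <vector>

using JointConfig = std::vector<double>;

// Placeholders for the system's own routines (not shown here).
std::optional<JointConfig> solveIK(const Mat4& goalFrame, double redundancyAngle);
bool withinJointLimits(const JointConfig& q, double margin);
double minObstacleDistance(const JointConfig& q);

// Scan the redundancy circle, starting at the current angle and moving outward
// alternately to both sides, until a configuration is found that is reachable,
// keeps a margin to the joint limits and stays clear of obstacles.
std::optional<JointConfig> scanRedundancyCircle(const Mat4& goalFrame,
                                                double currentAngle,
                                                double angleStep,
                                                double jointMargin,
                                                double minClearance) {
    const double kPi = 3.14159265358979323846;
    const int steps = static_cast<int>(kPi / angleStep);
    const double signs[2] = {+1.0, -1.0};
    for (int i = 0; i <= steps; ++i) {
        for (double sign : signs) {
            const double alpha = currentAngle + sign * i * angleStep;
            std::optional<JointConfig> q = solveIK(goalFrame, alpha);
            if (q && withinJointLimits(*q, jointMargin) &&
                minObstacleDistance(*q) >= minClearance) {
                return q;
            }
            if (i == 0) break;  // the unmodified angle needs to be tried only once
        }
    }
    return std::nullopt;  // no acceptable configuration on the whole circle
}

Starting the search at the current redundancy angle and widening it step by step keeps the additional elbow movement as small as possible, in line with the behaviour described above.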

IV. GRAPHICAL USER INTERFACE

According to Section II, if an autonomous high-level operation fails or the user interrupts an ongoing erroneous operation, user interactions are initiated (see Fig. 2). In the following, the graphical user interface for direct control of the robot arm is presented, which realizes the manipulative user interaction in case of user interruption of an erroneous autonomous manipulation. How the user selects an option in the interface is explained in Section V.

The main window of the interface the user is working with is the coordinate control. The first part of the whole graphical user interface presented to the user is therefore the Manipulation window (Fig. 7). The second window (Settings, Fig. 8) will only be visible when the user wants to change settings like the coordinate system or the kind of movement (translation or rotation), or when the user wants to end the direct control.

A. Manipulation window

The main graphical interface presented to the user consists of five areas (numbered in Fig. 7). The "Manipulation" area (1) and the "Step" area (2) are dynamic areas, which change according to the settings the user can adjust in the Settings window.

The initial configuration of the Manipulation window depends on the task context (see Section III.B.1) and on the user preferences. According to the overall control concept as described in Section II, the context for the task is given in the system. For the example of fine adjustments of the manipulator when grasping an object (see Fig. 3 - MoveToObject), the initial configuration is the World coordinate system (see Fig. 8) and the 'X' coordinate. This configuration is set according to the experimental results in this work (see Section VI). An image with the actual coordinate system drawn is presented to the user in the upper right corner. By choosing "Plus" (3) or "Minus" (4) the user can add a positive or negative step to the selected "Manipulate" variable in the actual coordinate system.

If the user wants to change the manipulation settings, he has to go to the control settings interface by selecting "Settings" (5). According to the settings the user made in that dialog (see Fig. 8), the appearance of the Manipulation window changes. If the user chose "Translation" in the Settings window, the Manipulation window contains the "X", "Y", and "Z" coordinates and the "Step" in "cm". On the other hand, if the user chose "Rotation" in the Settings window, the Manipulation window contains "Yaw", "Pitch", and "Roll" as well as the "Step" in "degrees". After finishing the direct control, the user has to go to the Settings window to end the direct control.

B. Settings window

After selecting "Settings" in the Manipulation window, the user gets to the Settings window (Fig. 8). This window also consists of five areas that the user can select. The upper part of the interface is for the preparation of the direct robot control. The user can change different settings, i.e., which kind of movement he wants to perform (translation or rotation, area 1) and in which coordinate system the manipulation shall be made (Gripper or World coordinate system, area 2). The coordinate system the user chose is presented to him as an image in the upper right corner. After these settings, the user can go back to the direct control interface (see Fig. 7) by selecting the "Back to Control" area (5). When the user has finished the direct control, he has to decide whether he was able to move the robot in the right way (3) or not (4). On success the robot performs the rest of the task autonomously, and on failure the system aborts the task completely.

Fig. 8. Settings window of the graphical user interface. The image in the upper right corner shows the actual coordinate system as selected in the "Coordinate-System" area (2). In this figure, the highlighting of an area in the P300-based BCI is also shown (area 1).
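The dialog behaviour described in this section can be summarized in a small data structure. The following sketch is purely illustrative (the type and field names are assumptions, not taken from the FRIEND II software); it only captures how the dynamic "Manipulation" area switches its labels with the selected kind of movement and which state the Settings window edits.

#include <array>
#include <string>

enum class Window { Manipulation, Settings };
enum class MovementKind { Translation, Rotation };
enum class Frame { World, Gripper };

struct DirectControlState {
    Window window = Window::Manipulation;           // dialog currently shown
    MovementKind kind = MovementKind::Translation;  // chosen in Settings, area 1
    Frame frame = Frame::World;                     // chosen in Settings, area 2
    double step = 1.0;                              // cm (translation) or degrees (rotation)
    bool finished = false;                          // set when the user ends direct control
    bool success = false;                           // user's final success/failure judgement

    // Area (1) of the Manipulation window is dynamic: it offers X/Y/Z for
    // translation and Yaw/Pitch/Roll for rotation.
    std::array<std::string, 3> coordinateLabels() const {
        if (kind == MovementKind::Translation)
            return {{"X", "Y", "Z"}};
        return {{"Yaw", "Pitch", "Roll"}};
    }
};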

V. BRAIN-COMPUTER INTERFACE

A Brain-Computer Interface is realized with EEG equipment and software for analyzing the EEG signals in real time. For the current work, the biosignal amplifier USBamp from g.tec is used to amplify the brain signals acquired using EEG gold electrodes. The acquired signals are digitized with a sampling rate of 128 Hz and further processed on a regular laptop computer. Custom-written software for signal analysis and classification has been developed in Visual C++. This software can detect either the P300 signal or the SSVEP response. More information on these brain signal patterns and their detection is given below.

Fig. 7. Manipulation window of the graphical user interface. The image in the upper right corner shows the actual coordinate system. In this figure, the highlighting of an area in the P300-based BCI is also shown (area 3).


A. P300-based BCI

Fig. 9. Top panel: The P300 spatial distribution in the brain. Bottom panel: The temporal P300 response as measured with EEG (thin line), and a parametric model h(t) used for detection (thick line). This model consists of three Gaussian bumps of different shifts and amplitudes.

The P300 response is a post-stimulus waveform occurring in the brain about 300 ms after the presentation of an anticipated stimulus [13],[14]. The temporal shape and spatial distribution of the P300 response are shown in Fig. 9. Spatially, the P300 response is found in the parietal lobe, i.e., at the top of the head. That is, to detect the P300 response, the EEG electrodes must be placed over this area. Since the shape of the P300 signal is approximately known, it can be detected with a matched filter. In the bottom panel of Fig. 9 (bold line) the matched filter is parameterized as a sum of three Gaussian bumps with different amplitudes and temporal shifts:

h(t) = A_1 e^{-(t-t_1)^2/(2\sigma_1^2)} + A_2 e^{-(t-t_2)^2/(2\sigma_2^2)} + A_3 e^{-(t-t_3)^2/(2\sigma_3^2)} \qquad (6)

These parameters can be optimized to match the P300 response of individual users as well as possible. The P300-based BCI now works as follows. The background frames of the five different fields of the dialogs in Fig. 7 and Fig. 8 are highlighted in random order. A field is highlighted by changing its background to a red color for 100 ms. The time between two highlights is 300 ms. The user focuses attention on the field containing the command he or she wants to execute. Every time this field is highlighted, a P300 response is elicited. Thus, when the matched-filter detection procedure indicates the presence of a P300 response in the EEG signals (by convolving the matched filter with the EEG signals), one can infer that the user wants to execute the command that was highlighted 300 ms prior to the detected response. Since the P300 response is weak and the EEG signals are noisy, each field must be highlighted several times (5-10 times depending on the user) before a reliable detection can be made [15]-[18].
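A minimal sketch of this matched-filter detection is given below, assuming a single post-stimulus EEG segment. The template parameters of Eq. (6) and the normalized-correlation threshold are user-specific and purely illustrative here; the actual detection pipeline of the system (cf. [15]-[18]) is more elaborate.

#include <cmath>
#include <cstddef>
#include <vector>

// Eq. (6): template h(t) as a sum of three Gaussian bumps. The amplitudes A,
// latencies t0 and widths sigma are fitted to the individual user in practice.
double p300Template(double t, const double A[3], const double t0[3],
                    const double sigma[3]) {
    double h = 0.0;
    for (int k = 0; k < 3; ++k) {
        const double d = t - t0[k];
        h += A[k] * std::exp(-d * d / (2.0 * sigma[k] * sigma[k]));
    }
    return h;
}

// Correlate a post-stimulus EEG segment (sampled at fs Hz) with the template.
// A detection is declared when the normalized correlation exceeds a threshold;
// this decision rule is a simplification for illustration only.
bool detectP300(const std::vector<double>& segment, double fs,
                const double A[3], const double t0[3], const double sigma[3],
                double threshold) {
    double score = 0.0, normS = 0.0, normH = 0.0;
    for (std::size_t n = 0; n < segment.size(); ++n) {
        const double h = p300Template(static_cast<double>(n) / fs, A, t0, sigma);
        score += segment[n] * h;
        normS += segment[n] * segment[n];
        normH += h * h;
    }
    const double denom = std::sqrt(normS * normH);
    return denom > 0.0 && score / denom > threshold;
}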

B. SSVEP-based BCI

When a person is looking at a light source blinking or flickering with a certain frequency, this frequency is also reflected in the neuronal activity of the visual cortex. This is known as the Steady State Visual Evoked Potential (SSVEP) response, and it can be detected as sharp frequency peaks in the EEG signals [6]. Therefore, an SSVEP-based selection can be realized by having an array of light sources where each light source is flickering with a distinct frequency. Each light source generates different frequency peaks in the EEG signals, and one can therefore classify which light source the person is looking at by analyzing the frequency content, which usually is done via a Fast Fourier Transform. In [19], an algorithm for detecting the SSVEP response in multiple EEG signals acquired at different scalp positions was presented. This algorithm is used in this work for detecting the SSVEP signals. To operate the dialogs in Section IV, five different frequencies are needed to give the user the opportunity to choose among all possible selections. A custom-made array with Light Emitting Diodes is used for this purpose, see Fig. 10.

Fig. 10. Five light sources with different blinking frequencies are used for the selection of an option in the user interface.

The diodes in this array flicker with the frequencies 13, 14, 15, 16 and 17 Hz. Each diode encodes a certain command in the dialog windows, as described in Table I. That is, when a peak at one of the above frequencies is detected in the EEG signals, one can conclude that the user is looking at the diode flickering with this frequency, and the associated command is issued.

TABLE I
SSVEP FREQUENCIES AND CORRESPONDING SELECTIONS

Diode  Frequency  Selection in Settings window     Selection in Manipulation window
1      13 Hz      Manipulation of Movement Type    Manipulation of Coordinate
2      14 Hz      Coordinate-System                Step Length
3      15 Hz      To Direct Control...             To Settings Window...
4      16 Hz      Result: Success                  Positive Direction
5      17 Hz      Result: Failure                  Negative Direction
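The detection step behind Table I can be sketched as follows. The system itself uses the multiple-channel SSVEP detection method of [19]; the single-channel, Goertzel-based power comparison below is only a simplified stand-in, and the dominance margin as well as the function names are assumptions made for this illustration.

#include <cmath>
#include <cstddef>
#include <vector>

// Signal power at a single frequency via the Goertzel algorithm.
double goertzelPower(const std::vector<double>& x, double freqHz, double fs) {
    const double w = 2.0 * 3.14159265358979323846 * freqHz / fs;
    const double coeff = 2.0 * std::cos(w);
    double s0 = 0.0, s1 = 0.0, s2 = 0.0;
    for (double sample : x) {
        s0 = sample + coeff * s1 - s2;
        s2 = s1;
        s1 = s0;
    }
    return s1 * s1 + s2 * s2 - coeff * s1 * s2;
}

// Pick the stimulation frequency with the largest power in the EEG window,
// provided it dominates the runner-up by a margin. Returns -1 if no clear peak.
int detectSsvepTarget(const std::vector<double>& eegWindow, double fs,
                      const std::vector<double>& stimFreqsHz, double margin) {
    int best = -1;
    double bestPower = 0.0, secondPower = 0.0;
    for (std::size_t i = 0; i < stimFreqsHz.size(); ++i) {
        const double p = goertzelPower(eegWindow, stimFreqsHz[i], fs);
        if (p > bestPower) {
            secondPower = bestPower;
            bestPower = p;
            best = static_cast<int>(i);
        } else if (p > secondPower) {
            secondPower = p;
        }
    }
    return (secondPower > 0.0 && bestPower / secondPower >= margin) ? best : -1;
}

With the 128 Hz sampling rate mentioned above and stimFreqsHz = {13, 14, 15, 16, 17}, the returned index maps directly onto the diode rows of Table I.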


VI. RESULTS

The goal in this work is to allow the user to adjust the position of the manipulator and gripper in the FRIEND II system via a BCI. To optimize this procedure, a number of questions need to be investigated. For example, which coordinate system should be used as default (World or Gripper)? What precision is required? How many adjustment steps are needed? These questions are examined below. There are also questions relating to the BCI, for example, how long does it take to make one decision and what is the risk of interpreting the brain patterns erroneously? These questions have been examined in detail elsewhere, see for example [19],[20]. In general, the SSVEP-based BCI is the more robust BCI, with a false-positive classification rate usually under 5% and with the ability to issue commands every 2-4 seconds, see [19],[21],[22]. The P300 BCI is slower but on the other hand useful for a larger group of people since no eye movements are required. The P300 BCI also integrates better into the graphical user interface. Upon acceptance, an on-site demonstration of the BCI and direct control of the FRIEND II system will be given at the ICORR'07 conference.

A. Experimental setup

Eight healthy persons (one female), aged between 20 and 35 years, were given the task to position the gripper and to grab a bottle using the dialog in Fig. 7. Thus, the user had to select along which axis to move the robot and with which step length (4, 2 or 1 cm). The total number of selections required (i.e., the number of clicks in the dialog) to place the gripper into the correct position was counted. This task was carried out for four different starting positions of the gripper, all about 10 to 15 cm from the target bottle, see Fig. 5. Each task was also performed using both the World coordinate system and the Gripper coordinate system. To focus this experiment on the difficulties in controlling the robot arm directly, the users used a computer mouse for the selections instead of the BCI. The time for making a selection is therefore assumed negligible, and the variability that would come from different selection times and errors for different persons with the BCI is avoided. As mentioned above, the selection times using the BCI have been studied in detail elsewhere.

B. Number of selections

In Table II, the number of selections the users made to grab the bottle, averaged over all four starting positions of the gripper, is reported. The selections for the Gripper coordinate system (G) and World coordinate system (W) are reported separately. Most of the subjects had to make more selections in the Gripper coordinate system (statistical significance p=0.025 with a Student's t-test). Since the users had no prior experience with this task, it can be assumed that fewer selections will be required after some practice. An important observation was also that the available step lengths seem to be adequate; a finer resolution than 1 cm was not required to grab the bottle, and step lengths of more than 4 cm are difficult for the user to handle. Depending on the user, the execution of one selection takes around 2 to 5 seconds with the BCI. Thus, based on the numbers in Table II, the total time for grabbing the bottle can be estimated to be between 30 seconds and 2 minutes, plus the time to consider the moves and selections (i.e., along which axis the next step should be and how long it should be). In the experiment, it was found that this latter deliberation time generally far exceeded the time for actually carrying out the selections with a BCI.

TABLE II
MEAN VALUES OF SELECTIONS THE SUBJECTS HAD TO MAKE TO EXECUTE THE TASKS IN GRIPPER (G) AND WORLD (W) COORDINATE SYSTEM

Subject   1      2      3      4      5      6      7      8
G        20.25  17.75  21.50  15.75  17.50  23.00  24.00  20.00
W        20.00  20.50  16.75  11.75  14.75  16.25  18.50  15.75
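As a rough back-of-the-envelope check (an illustrative calculation, not from the original text): with the mean selection counts of Table II (about 12-24 selections) and the 2-5 s per BCI selection quoted above,

T_{\min} \approx 12 \times 2\,\mathrm{s} = 24\,\mathrm{s}, \qquad T_{\max} \approx 24 \times 5\,\mathrm{s} = 120\,\mathrm{s},

which is in line with the estimate of 30 seconds to 2 minutes stated above (deliberation time not included).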

VII. DISCUSSION & CONCLUSION

The presented material aims to bridge two lines of work that hitherto have been developed independently: on the one hand the autonomous robot rehabilitation system FRIEND II and the software structure MASSiVE, and on the other hand the Brain-Computer Interface technology. The application chosen for this is direct control of the manipulator in the FRIEND II system using a Brain-Computer Interface. This functionality is now also supported in the MASSiVE software structure. A number of situations where the user can or must operate the gripper directly can be foreseen. For example, a high-level gripping task can be designed so that the autonomous motion planning drives the gripper to an approximately correct position and then hands over the fine adjustments to the user. Another option is that the system itself detects that it needs guidance from the user, e.g., if the motion planning fails. Yet another possibility is that the user stops the system and takes over the control if he or she recognizes that something is about to go wrong. The possibility to resolve simple problems is important to the overall goal of providing independence to the handicapped user.

Using a BCI for the direct control of the gripper makes the system potentially useful for people with a wide range of handicaps. However, at the present time, the BCI solution should be seen as an alternative only for people with severe disabilities. For example, users who can operate an SSVEP-based BCI can presumably also operate an eye-tracker. The relative merits of these two alternatives still have to be evaluated. The P300 BCI also proposed in this work does not rely on the user's ability to move the eyes, and it is therefore in this respect a more general solution. It also integrates more seamlessly into the system, because the external stimulus is realized as highlights directly in the dialogs shown on the computer monitor. This is not easily done with the repetitive SSVEP stimulus, since it then must be synchronized with the update frequency of the monitor. The array of external light emitting diodes shown in Fig. 10 is for this reason employed in this work.

The optimal integration of the BCI with the FRIEND II system is still being investigated. The results presented in the Results section indicate that the users prefer controlling the robot in the World coordinate system, at least when they are not trained for the gripping task. A precision of 1 cm steps was also found sufficient. There are many more questions that require investigation. For example, how should the computer screen or the LED array be placed in relation to the robot workspace so that the user can see both at the same time? Are the dialogs presented in Fig. 7 and Fig. 8 optimal with respect to the uncertainty inherent in a BCI, i.e., to the risk of interpreting the brain patterns erroneously and sending the wrong command? Finally, and most importantly, is a disabled user able to use the system? So far it has only been tested on healthy volunteers.

REFERENCES

[1] O. Prenzel, J. Feuser, I. Volosyak, O. Ivlev, and A. Graser, "Task Execution with the Rehabilitation Robotic System FRIEND II in Intelligent Home Environment", International Journal of Assistive Robotics and Mechatronics (IJARM), vol. 7, no. 2, 2006.
[2] O. Prenzel, C. Martens, M. Cyriacks, C. Wang, and A. Graser, "System Controlled User Interaction within the Service Robotic Control Architecture MASSiVE", Robotica, Special Issue on Mobile Manipulators: Basic Techniques, New Trends & Applications, to be published.
[3] D. Ojdanic and A. Graser, "A Fast Motion Planning for a 7DOF Rehabilitation Robot", IEEE 10th International Conference on Rehabilitation Robotics, ICORR 2007, The Netherlands, submitted.
[4] J. R. Wolpaw, N. Birbaumer, D. J. McFarland, G. Pfurtscheller, and T. M. Vaughan, "Brain-computer interfaces for communication and control", Clin. Neurophysiol., vol. 113, no. 6, pp. 767-791, 2002.
[5] C. Duncan-Johnson and E. Donchin, "On quantifying surprise: the variation of event-related potentials with subjective probability", Psychophysiology, vol. 14, pp. 456-467, 1977.
[6] C. S. Herrmann, "Human EEG responses to 1-100 Hz flicker: resonance phenomena in visual cortex and their potential correlation to cognitive phenomena", Exp. Brain Res., vol. 137, pp. 346-353, 2001.
[7] C. Martens, O. Prenzel, J. Feuser, and A. Graser, "MASSiVE: Multi-Layer Architecture for Semi-Autonomous Service-Robots with Verified Task Execution", Proc. 10th Int. Conf. on Optimization of Electrical and Electronic Equipments (OPTIM), Transilvania University Press, Brasov, vol. 3, issue 1, pp. 107-112, ISBN 9-7365-3705-8, 2006.
[8] C. Martens, D.-J. Kim, J.-S. Han, A. Graser, and Z. Z. Bien, "Concept for a Modified Hybrid Multi-Layer Control-Architecture for Rehabilitation Robots", Proc. 3rd Int. Workshop on Human-friendly Robotic Systems, Daejeon, Korea, January 21-22, 2002.
[9] O. Prenzel, "Semi-Autonomous Object Anchoring for Service-Robots", Methods and Applications in Automation, pp. 57-68, ISBN 3-8322-4502-2, 2005.
[10] C. Martens and A. Graser, "Design and Implementation of a Discrete Event Controller for High-level Command Control of Rehabilitation Robotic Systems", Proc. IEEE/RSJ Int. Conf. on Intelligent Robots and Systems, Lausanne, Switzerland, 30 Sept - 4 Oct, 2002.
[11] R. P. Paul, Robot Manipulators: Mathematics, Programming and Control, MIT Press, Cambridge, 1981.
[12] D. Ojdanic, O. Ivlev, and A. Graser, "A new Fast Motion Planning Approach for Dexterous Manipulators in 3D-Cartesian Space", Proc. ISR-Robotik Joint Conf. on Robotics, Munich, Germany, 2006.
[13] L. A. Farwell and E. Donchin, "Talking off the top of your head: toward a mental prosthesis utilizing event-related brain potentials", Electroencephalogr. and Clin. Neurophysiol., vol. 70, pp. 510-523, 1988.
[14] E. Donchin, K. M. Spencer, and R. Wijesinghe, "The Mental Prosthesis: Assessing the Speed of a P300-Based Brain-Computer Interface", IEEE Trans. on Rehab. Eng., vol. 8, no. 2, pp. 174-179, 2000.
[15] U. Hoffmann, G. Garcia, J.-M. Vesin, K. Diserens, and T. Ebrahimi, "A Boosting Approach to P300 Detection with Application to Brain-Computer Interfaces", in Proc. 2nd Int. IEEE EMBS Conf. Neural Eng., pp. 97-100, 2005.
[16] M. Kaper and H. Ritter, "Generalizing to new subjects in brain-computer interfacing", in Proc. 26th Annu. Int. Conf. IEEE Eng. in Med. and Biol. Soc., pp. 4363-4366, 2004.
[17] N. Xu, X. Gao, B. Hong, X. Miao, S. Gao, and F. Yang, "BCI Competition 2003 - Data Set IIb: Enhancing P300 Wave Detection Using ICA-Based Subspace Projections for BCI Applications", IEEE Trans. Biomed. Eng., vol. 51, no. 6, pp. 1067-1071, 2004.
[18] M. Thulasidas, C. Guan, and J. Wu, "Robust Classification of EEG Signal for Brain-Computer Interface", IEEE Trans. Neural Sys. and Rehab. Eng., vol. 14, no. 1, pp. 24-29, 2006.
[19] O. Friman, I. Volosyak, and A. Graser, "Multiple Channel Detection of Steady-State Visual Evoked Potentials for Brain-Computer Interfaces", IEEE Trans. on Biomed. Eng., to be published.
[20] H. Serby, E. Yom-Tov, and G. F. Inbar, "An Improved P300-Based Brain-Computer Interface", IEEE Trans. Neural Sys. and Rehab. Eng., vol. 13, no. 1, pp. 89-98, 2005.
[21] O. Friman, T. Lüth, I. Volosyak, and A. Graser, "Spelling with Steady-State Visual Evoked Potentials", IEEE Trans. Biomed. Eng., submitted.
[22] Y. Wang, R. Wang, X. Gao, B. Hong, and S. Gao, "A Practical VEP-Based Brain-Computer Interface", IEEE Trans. Neural Sys. and Rehab. Eng., vol. 14, no. 2, pp. 234-239, 2006.
