
  • IEEE TRANSACTIONS ON NEURAL SYSTEMS AND REHABILITATION ENGINEERING, VOL. 25, NO. 10, OCTOBER 2017 1785

    Gaussian Process Autoregression for Simultaneous Proportional Multi-Modal Prosthetic Control With Natural Hand Kinematics

    Michele Xiloyannis, Constantinos Gavriel, Andreas A. C. Thomik, and A. Aldo Faisal

    Abstract— Matching the dexterity, versatility, and robustness of the human hand is still an unachieved goal in bionics, robotics, and neural engineering. A major limitation for hand prosthetics lies in the challenges of reliably decoding user intention from muscle signals when controlling complex robotic hands. Most of the commercially available prosthetic hands use muscle-related signals to decode a finite number of predefined motions and some offer proportional control of open/close movements of the whole hand. Here, in contrast, we aim to offer users flexible control of individual joints of their artificial hand. We propose a novel framework for decoding neural information that enables a user to independently control 11 joints of the hand in a continuous manner—much like we control our natural hands. Toward this end, we instructed six able-bodied subjects to perform everyday object manipulation tasks combining both dynamic, free movements (e.g., grasping) and isometric force tasks (e.g., squeezing). We recorded the electromyographic and mechanomyographic activities of five extrinsic muscles of the hand in the forearm, while simultaneously monitoring 11 joints of hand and fingers using a sensorized data glove that tracked the joints of the hand. Instead of learning just a direct mapping from current muscle activity to intended hand movement, we formulated a novel autoregressive approach that combines the context of previous hand movements with instantaneous muscle activity to predict future hand movements. Specifically, we evaluated a linear vector autoregressive moving average model with exogenous inputs and a novel Gaussian process (GP) autoregressive framework to learn the continuous mapping from hand joint dynamics and muscle activity to decode intended hand movement. Our GP approach achieves high

    Manuscript received November 9, 2015; revised June 30, 2016; accepted November 11, 2016. Date of publication August 31, 2017; date of current version October 23, 2017. This work was supported in part by the Luxembourg Research Fund under Grant AFR1229297, in part by the EPSRC, and in part by the eNHANCE Project as part of the European Union's Horizon 2020 Research and Innovation Programme under Grant 644000. (Corresponding author: A. Aldo Faisal.)

    M. Xiloyannis, C. Gavriel, and A. A. C. Thomik are with the Brain and Behaviour Laboratory, Department of Bioengineering and Department of Computing, Imperial College London, London SW7 2AZ, U.K.

    A. A. Faisal is with the Brain and Behaviour Laboratory, Department of Bioengineering and Department of Computing, Imperial College London, London SW7 2AZ, U.K., also with the Data Science Institute, Imperial College London, London SW7 2AZ, U.K., and also with the MRC London Institute of Medical Sciences, London W12 0NN, U.K. (e-mail: [email protected]).

    Digital Object Identifier 10.1109/TNSRE.2017.2699598

    levels of performance (RMSE of 8°/s and ρ = 0.79). Crucially, we use a small set of sensors that allows us to control a larger set of independently actuated degrees of freedom of a hand. This novel undersensored control is enabled through the combination of nonlinear autoregressive continuous mapping between muscle activity and joint angles. The system evaluates the muscle signals in the context of previous natural hand movements. This enables us to resolve ambiguities in situations where muscle signals alone cannot determine the correct action, as we evaluate the muscle signals in their context of natural hand movements. GP autoregression is a particularly powerful approach which not only makes a prediction based on the context but also represents the associated uncertainty of its predictions, thus enabling the novel notion of risk-based control in neuroprosthetics. Our results suggest that GP autoregressive approaches with exogenous inputs lend themselves to natural, intuitive, and continuous control in neurotechnology, with a particular focus on prosthetic restoration of natural limb function, where high dexterity is required for complex movements.

    Index Terms— Neuroprosthetics, robotic hand, decoding, autoregression, EMG, MMG, Gaussian process, proportional control, neurotechnology.

    I. INTRODUCTION

    THE ability to dextrously control our hand is key to our evolution and at the centre of almost all skilled object interaction in daily life [1], [2]. It is thus not surprising that loss of hand functionality is severely debilitating and considered more serious than the loss of lower limb function [3]. Currently available commercial devices are typically constrained by limited functionality in terms of degrees of freedom, range of motion, sensory feedback and intuitive control [4]–[6]. Despite the mechanical complexity of the human hand, the bottleneck of modern prostheses lies not only in improving mechatronic design [7], but also in the human-machine interface required for robustly translating the user's intention into a suitable robotic action. This makes prosthetics cumbersome and limits their perceived effectiveness in daily life – limiting users' ability to accept the prosthesis as part of their body versus using it as a tool [8]. This explains why, despite significant improvements in prosthesis acceptance achieved in the last three decades [6], a main reason for their rejection and



    low adoption involves dissatisfaction with functionality and control [9]. Low adoption rates are the result of users switching, after initial use of bionic prostheses, to mechanical or even cosmetic prostheses.

    In upper arm prosthetics one typically uses sensors to record muscle signals from the residual limb or nerve. These signals are processed and translated into commands for a set of independently controllable actuators. Originally [10], prostheses were controlled using on-off control (or bang-bang control), which is a simple yet reliable way of actuating a prosthetic device with elementary commands (open vs close hand). With prosthetic mechatronics becoming more sophisticated, and enabling us to actuate multiple individual joints per finger, more sophisticated control approaches were sought. A common compromise was to combine discrete selection of one hand movement from multiple possible ones with direct proportional control of that motion.

    We believe that the current challenges amputees face in accepting and retaining use of robotic prostheses can be greatly alleviated through interfaces that allow continuous, simultaneous and intuitive control of the artificial hand at the level of individual finger joints. Continuous prosthetic control, also referred to as proportional control, does not imply that the relationship between control input and controller output is strictly proportional in the mathematical sense, only that it must be essentially continuous – in fact we explore both linear and nonlinear methods here. To be clear, the term proportional relates to the decoder – the forward path of control. We do not want to confuse it with the term "proportional" in the feedback control system sense (e.g. the "P" in PID control).

    Continuous control can allow nuanced movement of the prosthetic hand, just as in the human hand, and even enables users to retain or develop their personal movement flourish. To achieve proportional control, one must move away from the typical discrete classification approach and try to map residual muscle activity to joint movements in a continuous fashion. This approach has been investigated in wrist kinematics [11], [12] and hand dynamics [13] in test conditions. The rationale of these studies was to learn the physiological mapping between EMG activity and joint dynamics/kinematics [14] from a set of training signals. It has been shown that modelling this mapping in a linear fashion can yield good prediction accuracies; however, more sophisticated non-parametric statistical methods were suggested to outperform linear models (but incur a higher computational cost) [12]. Despite this progress, inferring the intended movement of the hand and fingers directly from these sensors in the daily setting is still an open challenge. Continuous control eliminates the delays of having to switch between a series of discrete grasps or poses. However, switching between multiple discrete grasps or poses has been helpful to users, as it allows them to explicitly configure the hand to the context. This context functionality has to be replaced in some way – we will demonstrate here a suitable approach that operates automatically.

    While myoelectric, i.e. electromyography (EMG) based, control has been established in prosthetics for 70 years [10], simultaneous proportional myoelectric control systems are far more recent, with studies being reported only in the last 5 years. Historically, prosthesis control has moved from mapping continuous myoelectric signals to discrete on-off control, to more sophisticated multi-way classifiers, to combining discrete classifiers of hand movement modes and their simple linear control, to the full joint mapping from muscle activity to hand movement [15].

    Fig. 1. Overview of different approaches to time series regression as applied to the problem of predicting changes to finger joint angles (α) from muscle activity (u). (A) Direct mapping takes input from present and past muscle activity to decode joint action, (B) the autoregressive relationship takes input from n past joint actions to decode joint action and (C) the autoregressive model with exogenous inputs integrates present and past muscle activity and past joint movements to predict joint action.

    It is worth structuring existing strategies for prosthetic control along the nature of the decoding problem, which we can group broadly into 3 approaches (see correspondingly Fig. 1.A, B, C). Most simultaneous proportional control approaches map some time window of neuromuscular data to the output of a hand movement. This way of approaching the decoding problem is exemplified by our overview graphic in Fig. 1.A. One is predicting changes to finger joint angles (α) at time t from muscle activity (u) by finding a direct mapping of the form f(u) = α, where u may involve a time window or multiple time windows of muscle activity. Most studies focus on "learning" such direct mapping functions from data, typically using single [16] or hierarchically organised [17], [18] neural networks. Moreover, dimensionality reduction and projection mapping methods have been used to harness opportunities arising from high density EMG recordings with many sensors [19], [20], where low-dimensional representations of the muscle signals are optimised for maximal discriminability between movement types. We note that all these prosthetic control frameworks are normally


    oversensored, i.e. the ratio of independently controllable robotic degrees of freedom to sensor readings is typically smaller than one.

    While the hand has many more independently actuated degrees of freedom than current prosthetics, the question arises whether the natural hand actually uses these degrees of freedom fully independently in normal daily life. In previous work, we showed that both the dominant hand [21] (see also [22], [23]) and the non-dominant hand [24] had joints that were highly correlated – when captured in daily life activities. Thus, our over 20 finger joints are controlled by our brains so as to be effectively embedded on a lower dimensional manifold: four to five dimensions (principal components) explained 80%-90% of the variability in the movement data. Moreover, we have previously found that within the first 1000 ms of action initiation (from a neutral pose) the hand shape configures itself towards different tasks that occur many seconds away. This enabled us to reliably predict the action intention from the initial small changes in hand pose occurring in the first few hundred milliseconds of movement [21]. We then demonstrated that a probabilistic latent variable model can be used to infer the movements of missing limbs from the movements of the residual limbs to control, for example, a prosthetic half-hand in real-time [21], [24]. This previous work thus demonstrated that the correlation pattern across joints, i.e. the spatial configuration of the hand, is highly structured in natural tasks and can be harnessed for prosthetic control.

    If hand joint configurations are correlated across the hand, what about the temporal correlation structure of natural hand movements? We previously showed using 1. statistical time series segmentation [25], 2. switching motion primitive approaches [26], and 3. sparse coding (compressed sensing) methods [24] that the recent past of ongoing hand movements is highly predictive of the continuation of a movement. Moreover, these compact temporal representations of movement coincided with fundamental grasps and we were able to directly map these to control robotic hand grasps [27]. Thus, natural hand movements enable prediction of future hand movements from past hand movements alone, without needing neuromuscular data. We can mathematically interpret this second approach as a problem of time series prediction, where the history of the movements reflects the inherent dynamics of the time series, i.e. the most likely motions of the hand in our case. This inherent dynamics is mathematically captured by continuous autoregressive models, i.e. f(α_{t−nτ}, …, α_{t−τ}) = α_t (see Fig. 1.B). Crucially, the complex coordination of multiple joints is simplified, as their correlated movement is the result of the inherent dynamics of the current action. This second approach is simultaneous and proportional by design and can map the movement of a few joints to the movements of the whole hand.

    The challenge to the second approach is, of course, that while ongoing hand movements (e.g. reaching for a cup) are in themselves correlated and allow us to predict reasonably well how a grasp is completed, there will always be events within a series of actions where the user makes a decision about a new action that cannot be predicted from the statistics of the past (e.g. after grabbing a cup one may grab a tea pot or a sandwich). This requires explicit control inputs from the user to disambiguate the next action. These control inputs are typically muscle activity patterns in the residual limb [10] or result from retargeted residual nerves [28].

    We explore here a combination of the traditional first approach and the unconventional second approach – a third way. We propose a novel framework (see Fig. 2) for information extraction from muscle-related signals which allows continuous, intuitive and naturalistic control of upper limb prostheses. Our new approach combines the non-linear spatio-temporal correlation structure of natural hand movements with muscle-driven control signals to exploit the best of both worlds. Our framework aims not only to learn the relation between muscle activity and hand kinematics, but also to exploit the natural autocorrelation of everyday hand movements to achieve a higher performance than pure control signal decoding strategies (first approach). Thus, we propose implementing this framework using tractable non-linear probabilistic models of autoregressive processes with exogenous inputs u (muscle signals) of the form: f(α_{t−nτ}, …, α_{t−τ}, u_{t−nτ}, …, u_{t−τ}) = α_t (see Fig. 1.C).

    We advance the hypothesis that, in contrast to previous applications, with this approach we should be able to use fewer sensors than independently actuated hand joints and yet achieve high decoding performance across all these joints. Specifically, we will test if we are able to reliably decode 11 finger and thumb joint movements from 5 EMG sensors on the forearm.

    II. METHODS

    A. Experimental Setup

    Six healthy male subjects, aged 23-28, gave written informed consent prior to participation in the study. Participants were seated on a chair fitted with a custom elbow rest support. Their elbow rested on a flat surface, the forearm was horizontal and the hand was positioned so that the axis of abduction/adduction of the wrist was parallel to the surface. In order to prevent activation of the extensor carpi radialis longus (responsible for the abduction of the wrist) during the experiment, the wrist was also rested on a customised support.

    Subjects were instructed to grasp and manipulate 7 objects (example hand postures are rendered in Fig. 4), each object 12 times during two different sessions involving 6 repetitions each. The 7 object manipulations were: grasping a ball, grabbing a book, drinking from a bottle, grasping a door handle, operating a computer mouse, holding a mobile phone and using a pen. To allow for comparison with other studies, we used objects and interactions from a recent taxonomy of hand postures [30].

    We adopted our low-cost system for MMG signal acquisition [29] and merged it with myoelectric signals recorded using a commercially available EMG amplifier [BrainVision LLC, Morrisville, NC]. We paired the signals recorded from muscle activity with kinematic measurements of finger movements during everyday tasks. We used these data to train a linear and a non-parametric regression model capable of continuously predicting the angular velocity of 11 joints of


    the fingers of the hand given the muscle activity and an initial condition.

    Fig. 2. Illustration of the probabilistic framework (graphical model) used to model the dynamic integration of muscle activity and hand-joint kinematics. Continuous decoding of muscle signals to predict (prosthetic) hand-joint velocities treats the joint velocities as a weakly stationary autoregressive process (joint space). See Tab. II for the details of the system configuration.

    At the beginning of each trial, the object name and time stamp were sent to a serial port via the ActiChamp amplifier; these were subsequently used for data segmentation. Subjects were also asked to apply their maximum extension and flexion force of their fingers at the beginning and at the end of the experiment. The root mean square (RMS) value of these segments was used to estimate the Maximum Voluntary Contraction (MVC), necessary for normalisation of the signal in the pre-processing phase.

    We recorded muscle activity with 5 EMG and 4 MMG channels positioned at the bellies of muscles (∗ denotes both EMG and MMG measurements) from the posterior (1: Extensor Digitorum Communis∗ (EDC), 2: Extensor Digiti Minimi∗ (EDM), 3: Extensor Carpi Radialis (ECR)) and anterior (1: Flexor Digitorum Superficialis∗ (FDS), 2: Palmaris Longus∗ (PL)) compartments of the forearm, as shown in Fig. 3.(a-b). The channel sites were located by palpating the respective muscles. The MMG sound chambers were positioned between the two electrodes of the same EMG channel (see Fig. 3.(c-f)). A summary of the monitored muscles is shown in Table I. Simultaneously, we measured finger movements using a CyberGlove I [CyberGlove Systems LLC, San Jose, CA]. This is a sensorised glove which measures the angle of all joints within the hand from resistive changes in stretch-sensitive sensors placed over each joint.

    TABLE I: MONITORED MUSCLES

    TABLE II: MONITORED HAND JOINTS

    The hand joints monitored for the purpose of this study are listed in Table II.

    B. Data Processing

    Prior to further analysis, we preprocessed our data in the following fashion using Matlab (MathWorks Inc., Natick, MA):


    Fig. 3. Monitored muscles in the anterior and posterior compartment of the forearm and sensor placement details. (a + c) Posterior compartment of the forearm and positioning of the electrodes. Each MMG microphone was located between the two corresponding EMG electrodes (in a bipolar configuration) of the same muscle. Channels on the muscles are numbered as in the text. (b + d) Anterior compartment of the forearm and positioning of the EMG and MMG channels. Channels on the muscles are numbered as in the text. (e) Printed circuit board for amplification, rectification and filtering of the MMG signal generated by muscle during contraction. (f) The mechanical signal is transduced by a miniature microphone embodied in a 3D printed resonance chamber. Details of the MMG sensor design are described in [29]. Illustration used in a + b adapted by an anonymous author from Wikimedia Commons.

    1) Segmentation of the data of different tasks using the time stamps. The resulting segments contained data from different trials of the same task.

    2) High-pass filtering of the EMG (fc = 10 Hz) and of the MMG signal (fc = 8 Hz) using a 5th order Butterworth filter to remove movement artefacts.

    3) Low-pass filtering of the signals (fc = 150 Hz) using a 5th order Butterworth filter to remove high frequency noise.

    4) A 50 Hz comb filter to remove power-line interference.

    5) Normalisation by Maximum Voluntary Contraction (MVC). This constrains the input signals for the regressor to the interval [0, 1].

    6) Downsampling of EMG and MMG signals to the CyberGlove rate, i.e. 138 Hz. This procedure was necessary to align the glove data with the muscle-related signals.

    The data acquired from the CyberGlove were smoothed using a first-order Savitzky-Golay filter with a running window of 23 consecutive samples to remove the discontinuities induced by the A/D converter. The window width was chosen to lie within the width at half maximum of the peak of the hand movement autocorrelation function (τ̄WHM = 362 ms, see also Fig. 5). This ensures that smoothing only involved highly autocorrelated samples and has little effect on the overall dynamics of the system.

    We then differentiated the position data to obtain angular velocity (α̇), which is statistically more stationary in natural hand movements, has a more Gaussian distribution and is known to be closely related to motor commands [32] of each joint's flexion/extension movement.
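    To make the pipeline above concrete, the following is a minimal Python/SciPy sketch of steps 2-6 together with the smoothing and differentiation stage. The original analysis was done in Matlab; the sampling rates, array names and the notch-for-comb substitution noted in the comments are illustrative assumptions rather than the authors' code.

        import numpy as np
        from scipy.signal import butter, filtfilt, iirnotch, savgol_filter

        def preprocess_muscle(sig, fs, hp_cut, lp_cut=150.0, mvc_rms=1.0):
            """Band-limit, notch-filter and MVC-normalise one EMG/MMG channel (steps 2-5)."""
            b, a = butter(5, hp_cut / (fs / 2), btype="high")   # step 2: remove movement artefacts
            sig = filtfilt(b, a, sig)
            b, a = butter(5, lp_cut / (fs / 2), btype="low")    # step 3: remove high-frequency noise
            sig = filtfilt(b, a, sig)
            b, a = iirnotch(50.0, Q=30.0, fs=fs)                # step 4: power-line interference
            sig = filtfilt(b, a, sig)                           # (the paper uses a 50 Hz comb filter)
            return sig / mvc_rms                                # step 5: normalise by the MVC estimate

        def align_to_glove(sig, t_sig, t_glove):
            """Step 6: resample a muscle channel onto the 138 Hz CyberGlove time base."""
            return np.interp(t_glove, t_sig, sig)

        def glove_to_velocity(angles, fs_glove=138.0):
            """Savitzky-Golay smoothing (23-sample window) and differentiation to angular velocity."""
            smoothed = savgol_filter(angles, window_length=23, polyorder=1, axis=0)
            return np.gradient(smoothed, 1.0 / fs_glove, axis=0)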

    Fig. 4. Example hand poses of subject hand configurations during designated dynamic and static object-manipulation tasks involving objects of different shapes and sizes. Example reconstructions of instantaneous hand poses, from top left to bottom right: grasping a ball, grabbing a book, drinking from a bottle, grasping a door handle, operating a computer mouse, holding a mobile phone and using a pen. Images created by rendering the CyberGlove joint configuration data using the LibHand library [31].

    C. Generalised Time Series Prediction Model for Prosthetics

    We implemented a model for continuous prediction of the angular joint velocities of the flexion/extension joints of the hand by expressing the angular velocity of the joints at time t, α̇_t, as a function of the angular velocity at time t − τ, α̇_{t−τ}, and the current muscle-activation vector u_t. We are thus treating the joint angular velocity as a wide-sense stationary autoregressive sequence that is influenced by external factors, namely, in our case, the forearm muscle activation pattern. The angular velocity of the joints at time t can thus be


    expressed as:

    α̇_t = f(α̇_{t−τ}, u_t) + ε    (1)

    We seek to learn the function f under the assumption of Gaussian distributed noise ε ∼ N(0, σ_n).

    To compare linear and non-linear approaches we can rephrase the goal of decoding hand motion from muscle activity as a probabilistic inference, as illustrated in Fig. 2. The current hand configuration is modelled as a random variable θ_t, an 11-dimensional state vector of the 11 independently controlled finger joint angles. The variable θ_t is conditionally dependent on the previous time step t − τ (where τ is the sample interval of the system) and the current muscle activation pattern u_t. The activation pattern u_t is here a (5 + 4)-dimensional vector of muscle activity recorded by EMG and MMG sensors on the forearm (muscle space). We interpret this in our graphical model as the sequence of finger joint motions being caused by muscle activity patterns.
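    As a small illustration of how the regression dataset implied by this graphical model can be assembled, the sketch below stacks the lagged joint velocities and the current muscle activations into one input vector per sample; the array names and shapes are assumptions, not the authors' code.

        import numpy as np

        def build_dataset(joint_vel, muscle, lag_samples):
            """joint_vel: (T, 11) angular velocities; muscle: (T, 9) EMG+MMG; lag_samples: τ in samples."""
            X = np.hstack([joint_vel[:-lag_samples], muscle[lag_samples:]])  # [α̇_{t−τ}, u_t]
            Y = joint_vel[lag_samples:]                                      # α̇_t
            return X, Y

        # e.g. τ = 350 ms at the 138 Hz glove rate:
        # X, Y = build_dataset(joint_vel, muscle, lag_samples=int(round(0.350 * 138)))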

    The time-lag τ plays a fundamental role in determining the accuracy and the computational demand of the algorithm. Its value is strictly related to the dynamics of the process that we are trying to describe. In order to identify a reasonable range of time-lags to use in our model, we analysed the autocorrelation function of the angular velocity α̇ on the data collected using the CyberGlove:

    R_α̇α̇(τ) = Σ_{∀n∈N} α̇(n) α̇(n − τ).    (2)

    We found an average, over joints and subjects, Width of Half-Maximum (WHM) of the autocorrelation function at τ̄WHM = 362 ms, as shown in Fig. 5. To estimate the impact of the time-lag on the model's performance, we trained a Gaussian Process (GP) and a Vector Auto-Regressive Moving Average model with Exogenous Inputs (VARMAX) with time-lags τ = 87 ms, 175 ms, 350 ms, 500 ms. Both models were trained and tested using a Leave-One-Out cross-validation on the 6 tasks shown in Fig. 4.
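    A minimal sketch of this lag analysis, assuming the smoothed angular-velocity traces from the preprocessing stage, is given below; the WHM estimate and the candidate lags follow the values quoted above.

        import numpy as np

        def autocorrelation(x):
            """Normalised autocorrelation R(τ) of a 1-D velocity trace (Eq. 2, scaled so R(0) = 1)."""
            x = x - x.mean()
            r = np.correlate(x, x, mode="full")[x.size - 1:]
            return r / r[0]

        def width_half_maximum(r, dt):
            """First lag (in seconds) at which the autocorrelation drops below half its peak."""
            below = np.where(r < 0.5 * r[0])[0]
            return below[0] * dt if below.size else np.inf

        # candidate lags evaluated in the paper, in seconds:
        # lags = [0.087, 0.175, 0.350, 0.500]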

    D. VARMAX Model

    We used a VARMAX model to learn a linear version of the function f in Equation 1. This model makes the assumption of linearity between the Auto-Regressive (AR) terms, a Moving Average (MA) term describing the contributions of the innovations ε_i, and the exogenous inputs u:

    α̇_t = c + b u_t + ε_t + Σ_{i=1}^{p} φ_i α̇_{t−i} + Σ_{j=1}^{q} θ_j ε_{t−j}    (3)

    where b u_t is the exogenous-input contribution, the first sum is the AR contribution and the second sum collects the innovations (MA contribution).

    We used the Dickey-Fuller test [33] to characterise the stationarity of the time series and verified that the MA and AR parts of the model were invertible and stable, respectively. The parameters (c, b, φ, θ) were fitted to the training data using a Maximum Likelihood (ML) estimate.

    We performed the forecasting stage by iteratively predicting the state at a time-lag τ ahead and using it as a new input for the next iteration. The starting point was set to α̇|_{t=0} = 0 for each joint.
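    For illustration, a linear VARMAX baseline of this kind can be fitted with the statsmodels library as sketched below; this is a stand-in for the authors' implementation, and the variable names and orders (p, q) are assumptions.

        from statsmodels.tsa.statespace.varmax import VARMAX

        def fit_varmax(Y_train, U_train, p=1, q=1):
            """Y_train: (T, 11) joint angular velocities; U_train: (T, 9) EMG+MMG exogenous inputs."""
            model = VARMAX(Y_train, exog=U_train, order=(p, q), trend="c")
            return model.fit(disp=False)          # maximum-likelihood estimation

        def forecast(results, U_test):
            # Iterated multi-step forecast driven only by the exogenous muscle signals,
            # starting from the state at the end of the training data.
            return results.forecast(steps=len(U_test), exog=U_test)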

    Fig. 5. Human hand movements while performing everyday grasping tasks have a strong spatio-temporal correlation structure, as revealed by autocorrelation analysis. The plot shows the grand average, computed over subjects and joints of the hand, of the autocorrelation function Rα̇α̇(τ) of angular joint velocities, with one standard deviation σ (grey area) around the mean μ. The mean autocorrelation function shows a width of half maximum at τWHM = 362 ms.

    E. Gaussian Process Autoregression With Exogenous Inputs

    The VARMAX approach learned a linear approximation to the function f in Equation 1 and we therefore investigate next a non-linear version, which may perform better. To this end, we used a Gaussian Process (GP) model, which is formally equivalent to an artificial neural network with infinitely many neurons in the hidden layer [34]. One can think of a Gaussian Process as defining a distribution over functions: instead of producing single prediction values of f, the GP model produces a full predictive distribution p(f). Thus, we can not only choose how to evaluate p(f), e.g. as the mean, mode or peak of the probability distribution of possible outcomes, but also obtain an implicit measure of the uncertainty that the model has about its own prediction. We can therefore define a loss function over the predictive distribution, and use Bayesian decision theory to perform decoding in an optimal way – so as to minimise the risk of wrongly decoding a hand movement, for example when the signal is ambiguous as to the intended motion.

    In GPs, the function f is treated as a latent variable, again under the assumption of normally distributed noise ε, and the model returns a process, i.e. a distribution over functions of the form:

    f(x) ∼ GP(m(x), k(x, x′))    (4)

    fully defined by a mean m(x) and a covariance function k(x, x′), where x is the input vector, which we defined as x = {α̇_{t−τ}, u_t} (see also Fig. 6).

    This Bayesian approach to regression yields a posterior distribution over the non-linear regression function f. We computed the posterior by combining the likelihood function P(D_α̇ | f, θ) (with θ being a vector of hyperparameters) of the sensory observations x and our prior over possible regression functions P(f).


    Fig. 6. Probabilistic graphical model of our Gaussian Process decoder. Grey circles indicate observable variables and empty circles represent variables that have to be inferred. In our study x are muscle signals and y joint angles.

    Assuming that the noise of our observations is independent and identically distributed, the posterior distribution over the regression functions has the form:

    P(f | D_α̇, θ) ∝ P(f | 0, K(X, X | θ)) · Π_{n=1}^{N} p(y_n | f_n, θ)    (5)

    where the first factor is the prior, with zero mean and a covariance matrix K(X, X | θ) derived from the covariance function k(x, x′ | θ), and the product over the N training points is the likelihood.

    We encoded the assumption of linearity in the covariance function, using a composite linear kernel with additive noise of the form:

    k(x, x′) = σ_f exp( −(x − x′)² / (2ℓ²) ) + Σ_{d=1}^{D} σ_l² x_d x′_d    (6)

    where the exponential term accounts for the noise, the sum over input dimensions is the linear term, D is the dimensionality of the input vector and {σ_f, ℓ, σ_l} is the vector of parameters which defines the properties of the covariance function. The mean function was chosen to be a constant zero function. Fitting of the hyperparameters was done by minimising the negative log-marginal likelihood by means of a conjugate gradient numerical method [35].
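    A minimal sketch of such a GP regression with a composite squared-exponential plus linear kernel, using scikit-learn as a stand-in for the GP toolbox used in the paper, is shown below (one GP per joint; X and y are the lagged design matrices assumed earlier).

        from sklearn.gaussian_process import GaussianProcessRegressor
        from sklearn.gaussian_process.kernels import RBF, ConstantKernel, DotProduct, WhiteKernel

        def fit_joint_gp(X, y):
            kernel = ConstantKernel(1.0) * RBF(length_scale=1.0) \
                     + DotProduct(sigma_0=0.0) + WhiteKernel(noise_level=0.1)
            gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True)
            return gp.fit(X, y)      # hyperparameters optimised on the log-marginal likelihood

        # Predictive mean and standard deviation (the uncertainty used later for risk-based control):
        # mu, std = gp.predict(X_test, return_std=True)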

    The forecasting stage was done by initialising the state at time zero, α̇|_{t=0} = 0, and iteratively predicting the state at a time-lag τ ahead, using a value drawn from the returned Gaussian distribution as an input for the next iteration. A computational limitation of GPs is that their computational demand and memory requirements grow as O(n³) and O(n²) respectively, where n is the number of training cases, making their implementation feasible only for small data sets. Because of the high computational demand of GPs, we used the Fully Independent Training Conditional (FITC) approximation [36] to solve the inference phase. In brief: to overcome the computational limitations we can introduce new latent variables, known as pseudo-inputs, which do not appear in the predictive distribution, but do significantly affect the goodness of fit. Using these methods, the relation between the training latent function values f and the test cases f∗ can thus be formalised as:

    p(f, f∗) = ∫ p(f, f∗ | ū) p(ū) dū ≈ ∫ p(f | ū) p(f∗ | ū) p(ū) dū.    (7)

    This equation is represented in the graphical model in Fig. 7. The computational demand of the model now scales with O(nm²).

    Fig. 7. Graphical model of the relation between the pseudo-inputs ū, the training latent function values f = [f₁, …, f_n]ᵀ and the test function value f∗.

    Fig. 8. Direct mapping evaluation: Performance of a direct linear decoder and the GP model in a muscle-to-joint mapping without autoregressive components, i.e. not taking into account the history of joint movements. The bar graph shows the average, over joints and subjects, of the correlation coefficient ρ (black bar, left axis) and RMSE (blue bar, right axis) between the actual and predicted angular joint velocities. This configuration was evaluated to allow comparison with previous studies that do not use autoregressive approaches (see Table III).

    We chose the number of pseudo-inputs to be N/150, where N is the number of training inputs. The hyperparameters were initialised randomly. K-means clustering of the training input space was used to initialise the positions of the pseudo-inputs. We then optimised both by minimising the negative log-marginal likelihood.
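    The sketch below illustrates this sparse-GP setup, with the number of pseudo-inputs set to roughly N/150 and their positions initialised by k-means, here using the GPy library as one possible implementation; it is an assumption-laden sketch rather than the authors' code.

        import GPy
        from sklearn.cluster import KMeans

        def fit_sparse_gp(X, y, inducing_ratio=150):
            n, d = X.shape
            m = max(1, n // inducing_ratio)                  # number of pseudo-inputs ≈ N/150
            Z = KMeans(n_clusters=m, n_init=10).fit(X).cluster_centers_
            kernel = GPy.kern.RBF(input_dim=d) + GPy.kern.Linear(input_dim=d)
            model = GPy.models.SparseGPRegression(X, y.reshape(-1, 1), kernel=kernel, Z=Z)
            model.optimize()                                 # minimise negative log-marginal likelihood
            return model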

    F. Evaluation

    Using data-driven methods to machine-learn a control framework from experimental data, e.g. during calibration of the prosthetic to the end-user, requires a number of caveats to ensure a fair and clean comparison of methods, and to predict how well a given technology will generalise to individual patients that are not part of the research and development process. We highlight here a number of challenges and caveats that we encountered in the field and how we addressed them.


    Fig. 9. Evaluation of the performance of the purely linear (a) and non-linear (b) autoregressive models for varying time-lag τ. These models do not consider muscle activity, and predict the intended hand movement from the context of past hand dynamics. All plots display the average, over joints and subjects, of the correlation coefficient ρ (black bars, left y-axis) and RMSE (blue bars, right y-axis) between the actual and predicted angular joint velocities, obtained performing a Leave-One-Out cross-validation over tasks. (a) Performance of the VARMAX model, for varying time-lags τ, with no exogenous inputs. In this case the hand configuration is inferred, at each step, only from its previous state and the muscle activity is ignored. The performance is, as expected, very poor (note the scale) as no muscle activity is used. (b) Performance of the GP model in the same configuration for varying time-lags τ. Error bars show mean ± standard deviation (SD).

    Fig. 10. Comparison of the performance of the linear (black bars) and GP (non-linear, blue bars) autoregressive models that incorporate muscle activity as exogenous inputs. The VARMAX (linear) and GP (non-linear) models use the combined EMG and MMG signals as exogenous inputs, as a function of time-lag τ. All plots display the average, over joints and subjects, of the correlation coefficient (a) and RMSE (b) between the actual and predicted angular joint velocities, obtained performing a Leave-One-Out cross-validation over tasks. The performance is about one order of magnitude better in correlation, and the RMSE almost two orders of magnitude lower, than for the purely autoregressive dynamics (see previous figure). No significant difference was found between the VARMAX and GP (p-value of a two-sample t-test > 0.05). Bars show mean ± standard deviation (SD).

    1) Target Quantities of the Decoder: In prosthetics, a number of target quantities have been used to determine what the decoder should control. These include the configuration of the hand, i.e. joint angles of the prosthetic, or torques produced by the motors. The challenge with joint angles is that it is difficult to generalise results obtained from joint angle data across isometric and dynamic movements of the hand, e.g. EMG patterns will look very different for isometric contractions (e.g. holding an object) versus an unrestricted closing motion of the hand where there is no resistance. Torque-based approaches are difficult to implement, because it is complicated to measure force-related data (e.g. end-point forces) in the hand during natural tasks. Therefore, we used the angular joint velocity, which, mechanically speaking, captures the impulse of the fingers, and operates both across isometric and dynamic movements. This, combined with the autoregressive approach, also gives the system the flexibility to learn whether to tune itself to more force-like signals (by setting the autoregressive weights with different signs, so as to approximate differentiation by finite differences) or position-like signals (by setting the autoregressive weights to positive values, so as to approximate integration by finite sums).

    2) Separation of Training and Test Data: There are a number of methods that can be used to evaluate, train and test the performance of prosthetic controllers that are machine-learned from experimental data. Normally, each controller is "fitted"


    Fig. 11. GP autoregression with exogenous inputs: performance on the thumb (3 joints). Pearson correlation coefficient ρ (black bars, left axis) and RMSE [°/s] (blue bars, right axis) between actual and predicted finger movement. Bars show mean ± standard deviation (SD).

    to an individual user based on one part of the (training) data from that one user, and then evaluated using another part of the (testing) data from the same user. This is an essential process in any regression or classification setting, otherwise one risks overfitting and not being able to observe how well a system operates under novel conditions.

    Surprisingly, it is often unclear in the existing literature how this separation of training and test data is implemented. Three general scenarios are possible, with very different consequences for prediction quality and performance. Typically some form of EMG and hand kinematic data is collected across a set of tasks or movements. The first approach is to randomly select a subset of samples from the data to build up a training and a test data set. This is however problematic, as in this case the time series and the correlations within a task make training and test data dependent. An improvement would be to divide the data into long chunks of contiguous time (e.g. the first half and the second half of the recording). Alternatively, the subject performs some set of defined tasks, which are often repeated to increase the amount of data collected. Therefore, a second approach is used in several studies where the same task or movement is repeated multiple times, and some repetitions are used for training and others for testing. We consider this problematic, because these data share the same statistical cause (the same task), and training and test data are thus only conditionally independent. The consequence is that most likely both correlation measures and Root Mean Square Error (RMSE) will appear much better than they are. A statistically more sound approach is to randomly select data from a set of tasks designated as training tasks, and a different set of tasks designated as the source of testing data. We thus do not mix training and testing across repetitions of the same task. This approach enables us to appropriately judge whether a decoder operates well in unknown conditions without overfitting. This last method is the one we have applied throughout the results reported here.
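    In practice, this task-wise separation can be implemented with a leave-one-group-out splitter, as in the sketch below, where the group label of each sample is the task it came from (the bookkeeping array is an assumption).

        from sklearn.model_selection import LeaveOneGroupOut

        def leave_one_task_out_splits(X, Y, task_labels):
            """Yield train/test splits in which no task contributes to both sets."""
            splitter = LeaveOneGroupOut()
            for train_idx, test_idx in splitter.split(X, Y, groups=task_labels):
                yield X[train_idx], Y[train_idx], X[test_idx], Y[test_idx]

        # Each split trains on all-but-one task and tests on the held-out task; results are then
        # averaged over splits (cross-validation) to estimate generalisation performance.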

    3) Cross-Validation: We tested the robustness of our results using cross-validation, which is a model validation technique for assessing how the results of our analysis will generalise to other data. We used this method because our goal is prediction, and we want to estimate how accurately a predictive model will perform in practice. One round of cross-validation involves partitioning a sample of data into complementary subsets, performing the analysis on one subset (called the training set), and validating the analysis on the other subset (called the validation set or testing set). To reduce variability, multiple rounds of cross-validation were performed using different partitions, and the validation results were averaged over the rounds. This enabled us to assess how well the same control framework performs across all tasks. Moreover, it gave us estimates of the uncertainty we have about the values of the performance measures for a given control framework.

    Fig. 12. Prediction performance on test data (correlation ρ between actual and predicted data) for our GP autoregressive model with exogenous inputs, when training and testing on the same task were allowed. Average performance (a correlation of 0.79) is highlighted as a dotted line. Bars show mean ± standard deviation (SD). This elsewhere common procedure was not used for our other reported results due to concerns of overfitting and of reflecting an apparently better performance on test data than is warranted (see main text for details).

    4) Evaluation of Autoregressive Models That Use Past Data to Predict Future Data: Finally, a key aspect of our approach is to use (non-linear) autoregressive techniques with exogenous inputs. This means that our model relates the current value of the angular joint velocities to both past values of the angular joint velocities and current and past values of the driving exogenous muscle signals. The performance of the autoregressive component can be evaluated in two different ways, with important consequences for the expected performance. Two modes of evaluation arise because the system uses past values of angular joint velocities. These can be either the ones the system predicted in the past or the true angular velocities of the joints as obtained from the training and testing data. The former method will use predictions to update predictions and thus diverge more over time due to the accumulation of noise in the prediction sequence. The latter method will have tight error bounds as past inputs are based on actual data.
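    The two evaluation modes can be illustrated with the following sketch, built around a hypothetical one-step predictor predict_step(prev_vel, muscle_t); the free-running rollout feeds predictions back as inputs, whereas the teacher-forced variant always uses the recorded velocities.

        import numpy as np

        def rollout_free_running(predict_step, muscle, n_joints=11):
            """Past inputs are the model's own predictions: errors can accumulate over time."""
            vel = np.zeros(n_joints)
            out = []
            for u_t in muscle:
                vel = predict_step(vel, u_t)
                out.append(vel)
            return np.asarray(out)

        def rollout_teacher_forced(predict_step, muscle, true_vel):
            """Past inputs are the recorded (true) velocities: error bounds stay tight."""
            preds = [predict_step(true_vel[t - 1], muscle[t]) for t in range(1, len(muscle))]
            return np.asarray(preds)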


    TABLE III: OVERVIEW AND COMPARISON OF PREVIOUS WORK AND RESULTS PRESENTED HERE. ABBREVIATIONS: sEMG, SURFACE ELECTROMYOGRAPHY; iEMG, INTRAMUSCULAR ELECTROMYOGRAPHY; MMG, MECHANOMYOGRAPHY. "IDENTICAL TASKS": TRAINING AND TESTING DATA WERE NOT SEPARATED ACROSS TASKS TO EVALUATE PERFORMANCE. "DIFFERENT TASKS": TRAINING AND TESTING DATA WERE SEPARATED BY TASKS TO EVALUATE PERFORMANCE. "DYN": EVALUATION USED MAINLY DYNAMIC MOTIONS (FREE MOVEMENT, PASSIVE OBJECT HOLDING). "ISO": EVALUATION USED MAINLY ISOMETRIC FORCE TASKS (e.g. PRESSING AN OBJECT). "DYN & ISO": EVALUATION USED BOTH DYNAMIC MOTIONS AND ISOMETRIC FORCES.

    III. RESULTS

    We present in the following section the results of our study to evaluate novel ways of decoding intended hand movements from hand kinematics and muscle activity. We measured the desired changes to finger joint angles (α) as target quantities that we wish to predict from muscle activity (u). We present the results following the approaches displayed in Fig. 1: The direct mapping approach learns a relationship between muscle activity and changes in finger joint configuration (see Fig. 1.A) and is equivalent to many existing approaches for continuous regression of hand configuration from muscle signals. We evaluated the information available in the muscle activity alone to predict finger movements using both a linear decoder and a GP decoder (see Fig. 8). The GP-based approach performs better than the linear decoder. These direct mapping results set a benchmark for our autoregressive framework and are compared in detail with previous studies in Table III further below.

    Next, we investigated to which extent hand movements are predictable by themselves, e.g. one can intuitively predict how a hand motion is going to continue. We captured this inherent predictability of hand movements mathematically using the autoregressive relationship between past finger joint configuration and present finger joint configuration (see Fig. 1.B). We found that human finger movements during daily life hand use show a clear linear autocorrelation structure in time (see Fig. 5) that extends over several seconds. This model predicts future movements based on past finger movements, as it captures the averaged natural dynamics of the hand. For any autoregressive system we need to determine the degree of lag (τ) that the system should look into the past to predict the future. To this end, we ran the linear autoregressive model (VARMA model), i.e. VARMAX with no exogenous inputs (u_t ∈ ∅), for varying lags (87 ms to 500 ms). These lags were chosen so as to cover the area of support of the main peak of the empirical autocorrelation function (Fig. 5). Fig. 9 shows the results of the model run in the aforementioned configuration in terms of Root Mean Square Error (RMSE) and Pearson's correlation coefficient (ρ) between the 11 actual and predicted finger angular joint velocities. The values were averaged across joints, subjects and trials to reflect the overall performance of the models for varying τ. In Fig. 9 the RMSE (right axis) and the correlation coefficient ρ (left axis) indicate, as expected, a very low performance. The performance in terms of RMSE decreases for increasing lags, while the correlation between predicted and actual hand movements improves with lag size. The results are similar for both the linear and non-linear model. Evaluating the autoregressive approach enables us, however, to set a lower limit to the performance any muscle-driven decoder should achieve.

    A. (Gaussian Process) Autoregression With Exogenous Inputs

    The third approach (see Fig. 1.C) is our proposed framework, which combines direct mapping and autoregressive prediction – effectively placing ongoing muscle activity into the context of ongoing hand movements. This autoregressive model with exogenous inputs captures the natural dynamics of the hand (autoregressive component) as "perturbed" or driven by ongoing muscle activity (exogenous component). We present in the following the evaluation and characterisation of this approach. In analogy to our purely autoregressive model, we measured the lag-dependent prediction performance of the linear autoregressive model with exogenous inputs and of the GP-based version. Fig. 10 shows the performance of both the VARMAX and the GP model with a 9-dimensional input space, comprising the 9 muscle signals from the EMG and MMG sensors (u_t ∈ R⁹). Both models show an increase in prediction accuracy for larger τ values, even beyond the correlation length estimated from the data, whilst no significant difference was found between the GP and VARMAX performance.

    The performance of both our models is better than the linear or GP-based direct mapping by about 30%, and the RMSE is about 50%-70% lower across lags. For all lags, both the VARMAX and the GP model perform better than the purely autoregressive system by an order of magnitude in terms of correlation and by almost two orders of magnitude in terms of RMSE. More importantly, the GP-based approach produces better correlated results than the linear autoregression. Thus, the linear and GP-based autoregression with exogenous inputs performs better than both direct and


    Fig. 13. Contribution of sensory modalities in the linear autoregressive model with exogenous inputs (a) and the GP framework (b). Both frameworks yield similar results: the lower limit to decoder performance is set by the correlation of natural hand movements alone, as determined by fitting the VARMA model (no exogenous inputs), left-most bar in both figures. Including MMG signals as exogenous inputs, or just EMG signals, or both MMG and EMG, yields large improvements in performance. A significant difference was found between the no-exogenous-inputs model and all other configurations, and between the MMG configuration and the EMG and EMG+MMG configurations. ∗∗ denotes p < 0.001 and ∗∗∗ denotes p < 0.0001 of a two-sample t-test with α = 0.05. Bars show mean ± standard deviation (SD).

    purely autoregressive methods. Both models are able to control more independent degrees of freedom in the finger actuators (11) than we have sensors to read out muscle activity (9). The ratio of independent Degrees of Freedom (DoF) controlled to sensors is 1.22.

    We investigated the underlying mechanisms of this prediction by analysing the optimised values of the VARMAX parameters (c, b, φ, θ) and of the GP covariance parameters {σ_f, ℓ, σ_l} for various lags. For the VARMAX model, it was found that with increasing lag τ the contribution from the autoregression term φ decreases, while the opposite happens for the moving average term θ. We found similar results for the GP model, with an increasing contribution of the EMG-MMG signals u_t and a reduced contribution of the previous state α_{t−τ}. This trend could be explained by the natural smoothness of human movements [37], which causes values of angular velocity that are close in time to be highly correlated. For small values of τ, the model thus relies nearly entirely on the previous state.

    Globally, the VARMAX and the GP models did not show a significant difference for a fixed time-lag (p-value of a two-sample t-test > 0.05), and both methods proved to be stable over time, with no significant change in prediction quality during the ≈ 20 mins of the experiment.

    Fig. 14 shows the predicted joint velocities of the hand for three consecutive ball-grasping tasks, as estimated by the GP, with a focus on the index MCP joint during a single repetition. GPs return, for each estimated point, a Gaussian distribution whose standard deviation can be interpreted as a measure of the uncertainty of the predicted value. This feature can be used for risk-based control of neuroprosthetic devices (see below).
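    As an illustration of how this uncertainty could be exploited (a sketch of the idea, not a controller reported in the paper), the commanded joint velocity can be attenuated when the predictive standard deviation is large, so that ambiguous muscle signals lead to cautious movements:

        import numpy as np

        def risk_aware_command(mu, std, std_ref=10.0):
            """mu, std: GP predictive mean and standard deviation [deg/s] for each of the 11 joints."""
            confidence = 1.0 / (1.0 + (std / std_ref) ** 2)   # 1 when certain, toward 0 when uncertain
            return confidence * mu                             # attenuated velocity command

        # e.g. with the GP sketched earlier: mu, std = gp.predict(x_t, return_std=True)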

    The best performing model in our framework uses the non-linear GP to combine muscle activity and autoregressive inputs of hand posture with large lags. This model produces a good degree of correlation between actual and predicted hand velocities (ρ = 0.61) and a very low RMSE (8°/s).

    The GP-based approach is, by design, predicting each individual finger joint independently of the other joints. Crucially, we find (see Fig. 11) that the thumb's individual joints are highly predictable in terms of correlation (ρ = 0.82) and RMSE, which is important given that the opposable thumb is the most independently controlled digit of the hand [24].

    1) Generalisation Performance: We have to be able to judge how well a decoder will operate in unknown conditions when the user wants to generate a novel set of movements. To assess this generalisation performance we have to separate training data from testing data; this is done to avoid overfitting and spuriously high performance reports. We found in the literature that generalisation performance is often assessed with a problematic lack of separation between training and test data (see Methods for details). This occurs when the same task or movement is repeated multiple times and some repetitions are used for training and others for testing. This is problematic because these data share the same statistical cause (the task or movement), and training and test data are thus only conditionally independent. Hence, we pursued a statistically more rigorous approach by randomly selecting data from a set of tasks designated as training tasks, and a different set of tasks designated as the source of testing data. We thus do not mix training and testing across repetitions of the same task.

To illustrate the importance of this distinction we computed the performance of our best performing GP autoregressive model with exogenous inputs. We used two methods to separate test and training data: 1. training using separate training and test tasks (our recommended method), and 2. training where we allowed data from different repetitions of the same task to be used for training and testing (see Fig. 12). The performance of the second approach is considerably higher (ρ = 0.79 vs ρ = 0.61), the level depending on which task's repetitions were used for training and testing. In general, the test data performance is upward biased by


Fig. 14. Towards risk-based control. Muscle signals and decoded joint velocities of the GP model for three consecutive repetitions of grasping and squeezing a ball. The bottom set of plots shows the 5 EMG and 4 MMG signals as a fraction of maximum voluntary contraction (MVC). The middle set of plots shows the predictive distribution of the on-line decoded joint angular velocities used to control 11 independent finger joints. The inset shows the actual (black) versus predicted (blue) values of the thumb MCP joint. We plot in the inset the uncertainty of the system in its belief of the prediction, as the standard deviation of the GP framework's predictive distribution (blue area). ∗ denotes muscle signals from MMG sensors. Refer to Tables II and I for muscle and joint acronyms.

30% when using the second method. These results highlight how repetitions of specific tasks, here grasping a cylinder or a ball, provide a considerable upward bias to the test performance values. We report the differences in test performance and the method of separating training and test data when comparing our results to previous work in the literature (as these used this approach; see Tab. III, column training/testing), and otherwise report the more rigorous method (which results in overall lower performance values) throughout our results. In addition, any rigorous approach for evaluation has to be complemented by cross-validation to estimate the variability in performance reports. This enables us to carry out statistical tests to compare different approaches and highlights, for example, that performance differences between our VARMAX and GP approaches that appear when comparing average performance values can actually be small when factoring in the variability across reported performance measures.
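A minimal sketch of this evaluation pattern follows, assuming a grouped cross-validation over tasks and a simple placeholder decoder (ridge regression stands in for the VARMAX/GP models); it only illustrates reporting a mean and spread of the correlation coefficient.

```python
# Minimal sketch: task-wise cross-validation, reporting mean +/- SD of the
# correlation coefficient so decoders can be compared against this variability.
import numpy as np
from scipy.stats import pearsonr
from sklearn.model_selection import GroupKFold
from sklearn.linear_model import Ridge   # placeholder decoder, not the paper's model

rng = np.random.default_rng(0)
X = rng.standard_normal((1000, 20))
y = rng.standard_normal(1000)
task_ids = rng.integers(0, 8, 1000)

rhos = []
for train_idx, test_idx in GroupKFold(n_splits=4).split(X, y, groups=task_ids):
    decoder = Ridge().fit(X[train_idx], y[train_idx])
    rho, _ = pearsonr(y[test_idx], decoder.predict(X[test_idx]))
    rhos.append(rho)

print(f"rho = {np.mean(rhos):.2f} +/- {np.std(rhos):.2f} (mean +/- SD over task folds)")
```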

2) Evaluating Multiple Sensor Modalities: In our multi-modal approach we combined both EMG and MMG data directly. How important or relevant is each sensor modality for our reconstruction error? We evaluated the relative contribution of these two exogenous modalities. While the GP approach forms a complex non-linear representation of the multiple modalities, we can use the linear model to get a sense of the individual contributions of the modalities. Fig. 13a shows the results obtained from the VARMAX model, in terms of the correlation coefficient between the actual and predicted velocities, when using MMG signals as exogenous inputs (black bar), EMG signals as exogenous inputs (blue bar), and both (red bar). A statistical analysis (two-sample t-test with α = 0.05) revealed a significant improvement from the autoregressive model with no exogenous inputs (VARMA) to the ones with exogenous inputs (p


We follow the taxonomy of Fougner et al. [15] to place our prosthetic control approach into the field's context. In the context of this taxonomy, we present here a multi-modal, simultaneous, multi-dimensional proportional control framework. Moreover, we introduce the novel aspects of risk-based control and undersensored control. Our control approach is proportional, as a user can map a continuous sensory input interval by varying his control input within a corresponding continuous interval. We use a simultaneous multi-dimensional control approach, in that we are controlling multiple independent degrees of freedom (DoF) of the system (individual finger joints) at the same time. This is in contrast to established approaches that combine a classifier, which selects which hand movement to execute, with a mapping function (regression function) that actuates that hand movement in a proportional way. To be clear, controlling opening and closing of the hand as a whole may affect multiple mechanical DoF of the hand mechanics, but has only one independent degree of freedom in terms of the user's ability to control it.

We control more independent degrees of freedom than we have independent sensors (11 actuated joints vs 9 sensors, a ratio of 11/9 ≈ 1.22), while previous work typically has ratios well below 0.5. Thus, unlike most previous work, our approach enables undersensored control, where we are controlling more independent degrees of freedom than we have sensors. This striking result is possible because we exploit the temporal correlation structure of natural hand movements to provide a context for the few muscle signal sources we use. This allows our framework to disambiguate among possible motions.

Multi-modal control implies here that we combine several sensory modalities, i.e. the joint angle configuration of the prosthetic, surface/intramuscular electromyography (s/iEMG), and mechanomyography (MMG). While EMG has been established in prosthetic control for 70 years [10], mechanomyography has only recently been used for classification-driven prosthetic control approaches [38]. We used our microphone-based surface MMG sensors [29] that enable recording muscle activity without the need for electrical contact. This makes day-long recordings of muscle activity feasible, with a signal-to-noise ratio comparable to EMG [39]. We reported that MMG recordings are stable enough to perform reliable muscle force prediction [40], [41]. Modern powered prostheses typically use surface electromyography (sEMG) for neuromuscular monitoring and prosthetic control. Alternatively, recent studies have proven the efficacy of mechanomyography (MMG), EMG's mechanical counterpart [29], [39], which reflects both motor unit recruitment [42] and global motor unit firing rate [43]. While EMG has been successfully used for classification of individual finger movements [44] and for fine-action robotic control [45] in the laboratory environment, prosthetic control mostly involves binary classification to decide whether to open or close the hand. Recently, MMG studies have demonstrated similar results [46], suggesting that MMG signals are valuable alternatives or complements for neuroprosthetic control. We evaluate how the combination of MMG and EMG signals affects performance and find little benefit in adding MMG signals to the EMG signals. However, we showed

that using MMG signals alone provides reasonable but lower performance compared to EMG, and that EMG signals alone eclipsed the need for additional MMG signals. Considering that MMG is typically a factor of 10-100 cheaper per channel than EMG [29], we believe that current MMG sensors are probably better suited for discrete control rather than simultaneous, proportional control of many degrees of freedom. Independent of the specific outcome of MMG performance, the use of a probabilistic inference model to infer joint kinematics for prosthetic control has a fundamental conceptual advantage, namely that it allows combining multiple physical modalities (and physical units) of input signals in a principled manner. This is because conventional approaches such as neural networks, and also our VARMAX method, produce a mapping between the values (irrespective of units), and so changes in units (e.g. the microvolt range of EMG is very different from the centivolt range of MMG) produce scale-dependent changes that normally have to be carefully normalised to map across modalities (e.g. when combining joint angle and voltage inputs). In contrast, probabilistic approaches, such as our GP, map values to a probability distribution of latent variables and thereby establish relationships between these probability distributions. Thus, probabilities are mapped to probabilities, enabling invariance to physical scale and across physical modality (physical units).
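The paper's argument here is about mapping probabilities to probabilities; as a loosely related, purely illustrative sketch, one common practical way a GP kernel absorbs inputs with very different physical units is a per-dimension length scale (automatic relevance determination). This is an assumption for illustration, not the mechanism the authors describe.

```python
# Illustrative sketch only: an anisotropic RBF kernel (one length scale per input
# dimension) lets differently scaled inputs, e.g. microvolt-range EMG next to
# deg/s joint velocities, be combined without manual unit normalisation.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

rng = np.random.default_rng(0)
X = np.column_stack([
    rng.standard_normal(300) * 50e-6,   # EMG-like channel, microvolt scale
    rng.standard_normal(300) * 0.01,    # MMG-like channel, different scale
    rng.standard_normal(300) * 30.0,    # lagged joint velocity, deg/s
])
y = 0.3 * X[:, 2] + 1e4 * X[:, 0] + rng.standard_normal(300) * 2.0

kernel = RBF(length_scale=[1.0, 1.0, 1.0],
             length_scale_bounds=(1e-8, 1e8)) + WhiteKernel()
gp = GaussianProcessRegressor(kernel=kernel, normalize_y=True).fit(X, y)
print(gp.kernel_)   # the fitted per-dimension length scales adapt to each unit
```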

    A. Comparison to Previous Work

Placing our results in the context of previous work requires making comparisons across very different studies, in the absence of an established, publicly accessible benchmark data set or physical system for evaluating proportional control. We therefore focussed on selected recent work that published details of their data and their performance evaluation procedure (please refer to Tab. III throughout this section) to compare our work to.

Careful selection of the performance evaluation measure for control algorithms is important, as these use the performance as the automatic optimisation criterion for the system. We focussed on decoding joint velocities, which are highly conserved across users [24] and are, within users, more stationary than other quantities [32], and thus promise to generalise better to novel users. Using joint velocities as a target modality enables us to capture actuator impulse, enabling a compromise between controlling hand pose (useful in dynamic tasks, e.g. passively holding an apple) and hand force (useful in isometric force tasks such as opening a bottle). This contrasts with previous work that either only considered hand pose (joint angles) [11], [12], [16]-[18], [25] or grasp force [13].

The RMSE we report for GP autoregression is averaged across all joints; the error between actual and predicted joint angular velocity is very small (8°/s). This RMSE is equivalent to the angular velocity of a watch hand that takes 45 seconds to complete a full 360° circle, and comparable to the natural drift velocity of the resting, passive hand. However, the root mean square error (RMSE) is not necessarily a good measure of performance for a prosthesis control system, because a low RMSE can mask a poor ability to follow the dynamics of the system (see [15] for review,


see also [47], [48]). Moreover, as other proportional control studies looked at different modalities, e.g. force or joint angle, it is difficult for us to compare raw RMSE across different physical units. Joint angles capture little about the dynamics of the hand; e.g. they do not account well for the fact that pose-based control induces transients as the hand moves from one configuration to the intended configuration, which can make it difficult to control hands fluidly in closed loop. We conclude that we need to compare the instantaneous Pearson correlation coefficient ρ = √(R²) of the predicted and actual signal, which indicates how well the predictions perform and co-vary with the data (see column "Perf"). ρ is unit-less and thus allows a somewhat better comparison across systems. Our correlation coefficient for predicting 11 finger joint movements is within 10% to 15% of current state-of-the-art approaches focussing on wrist control [12], [16], [18]. We consider finger decoding the harder problem, as the hand has more degrees of freedom and these often involve more, smaller, and deeper muscles in the forearm than those of the wrist, and thus poorer signal quality.
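For reference, a minimal sketch of the two measures as used here (RMSE averaged over joints and the per-joint Pearson correlation); the arrays are hypothetical stand-ins for actual and predicted joint angular velocities.

```python
# Minimal sketch: per-joint RMSE and Pearson correlation between actual and
# predicted joint angular velocities (deg/s), averaged across joints.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(0)
actual = rng.standard_normal((1000, 11)) * 20.0
predicted = actual + rng.standard_normal((1000, 11)) * 8.0

rmse = np.sqrt(np.mean((actual - predicted) ** 2, axis=0))
rho = np.array([pearsonr(actual[:, j], predicted[:, j])[0] for j in range(11)])

print(f"RMSE = {rmse.mean():.1f} deg/s (mean over joints)")
print(f"rho  = {rho.mean():.2f} (mean over joints)")
```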

Comparison of these performance numbers is further complicated by the nature of the performance evaluation (where it is published in sufficient detail). We show that there is a 25% boost in correlation measures of test performance depending on whether one omits separating test and training data, so that data from the same task (but different repetitions) is used for both settings (see column Train/Test, report "identical"), or whether training and testing data are clearly separated across tasks (see column Train/Test, report "different"). The latter is statistically the more sound measure to evaluate test data performance, as it speaks to the ability of a framework to generalise to novel settings (and potentially novel users), and is the measure we applied throughout our analysis. The appropriate separation of test and training data needs to be paired with a cross-validation approach to estimate the uncertainty in performance measures. All our results presented here report the variability of performance measures, but we did not include this variability in Tab. III as, aside from [24], no uncertainty estimates were published in the tabled literature. Finally, another confounding factor that blurs the differences in performance across studies is that previous work focusses on assessing proportional control either in dynamic movement tasks, i.e. dynamic, free, unimpeded movement of the hand or passive holding of a light object (see column Task, report "dyn"), or in isometric tasks (pressing a force sensor, e.g. [13]) (see column Task, report "iso"). In contrast, our set of data combines both dynamic tasks and isometric object manipulation tasks that arise from interaction with daily life objects. This approach reflects our need and aim to evaluate performance across daily life scenarios and the use of joint angular velocity control; it is difficult to extrapolate how well other methods would generalise across this natural spectrum of tasks. We suggest that focussing on free movement tasks alone, or isometric force tasks alone, will result in inflated performance reports compared to evaluating a more naturalistic combination of the two, as we have done here.

Moreover, there are large differences in the number of independently controlled degrees of freedom (DoF), as sometimes a single variable is decoded (e.g. grasp force) while in other cases multiple actuators are decoded individually. We focussed on actuating 11 individual finger joints independently. The sensors used are very different, encompassing surface EMG electrodes and high-density EMG arrays, but also invasive measures of muscle activity (see column "sensors"). Our approach is undersensored, as we not only control a much larger (>3-fold) set of independent DoF than most previous approaches, but also have a higher ratio of independently controlled degrees of freedom to sensors (electrodes): 11/5 = 2.2, or 11/9 ≈ 1.22 if we include MMG with the EMG. In contrast, previous work had ratios significantly smaller than 1 (see column "DoF/Sensor ratio"). Reference [17] used comparable levels of undersensored control to our work presented here; however, a correlation coefficient was not reported and the RMSE of 15° was estimated from their publication, as no summary RMSE was reported and the per-joint error was reported as a fraction of (unspecified) joint ranges (to obtain the value we assumed a 90° joint range for all joints, which is a low value, and averaged across the joints). Also, their work considered only dynamic free movement tasks without the isometric contractions resulting from object manipulation. Our work controls joint velocities (RMSE 8°/s) and not absolute joint configuration as in [17]; thus, over a period of roughly 2 seconds our control error would accumulate to their reported RMSE of 15°, but in this time the end-user would have multiple time steps to compensate for pose configuration error. We thus conclude that, when considering the large number of simultaneously controlled degrees of freedom and the degree of undersensored control for fluid full-finger hand control, GP autoregression operates twice as well as previous comparable work.
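As a quick arithmetic check of the comparison above, using only the numbers already quoted: a joint-velocity error of 8°/s accumulates to a 15° pose error after t ≈ 15° / (8°/s) ≈ 1.9 s, consistent with the "roughly 2 seconds" stated above.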

Previous approaches use some form of direct mapping, paired most frequently with neural networks, which take into account present (and past) muscle activity, while we developed a probabilistic description of the inference problem (cf. Fig. 2) and used a GP autoregressive model with exogenous inputs to solve the problem. Some work performed direct mapping of signals after muscle activity signal feature extraction [11], [13], [18]; these features implicitly included time-series elements of the muscle signal input. Other work [16], [17] used, for example, explicitly time-delayed inputs into neural networks to incorporate the history of muscle activities. In contrast, here we operate with the history (i.e. context) of the hand movement itself (autoregression), and thus our approach takes into account both past joint kinematic context and muscle activity to perform decoding. Thus our work should also be compared to [21], [25], which exploit directly the conserved spatio-temporal structure of natural hand kinematics to predict, from the configuration of fingers 3-5, the configuration of the (missing) thumb and index finger. That work is applied to partial hand prosthetics and operates without having to use muscle activity, while here we reconstruct the full hand configuration from knowledge of the spatio-temporal structure and muscle activity; we achieve an over 30% higher correlation coefficient here than reported before. Proportional control approaches focussed on different control quantities, such as wrist control and opening/closure of the hand (as a whole) or grasp force


(see column "controlled quantity"). In contrast, we enabled here detailed, fluid control of the individual fingers and especially the thumb, for which we report an average correlation coefficient of ρ = 0.72 and an RMSE of 7°/s.

    B. Risk- and Context-Aware Control

The nervous system and movements are inherently noisy [49], [50]. We use probabilistic Bayesian methods here to process this neurobehavioural data in a principled way and to operate with this uncertainty. We presented a novel framework for prosthetic control, framed in the language of Bayesian Decision Theory, that reinterpreted the prosthetic decoding problem as a recursive time series prediction model with external inputs. We compared both linear approaches and non-linear methods based on Gaussian Process (GP) regression. While GP regression has not been used for direct mapping problems before, it is generally considered equivalent in terms of performance to conventional artificial neural network approaches, which have been used extensively for direct mapping in prosthetic control. In fact, Gaussian Processes are formally equivalent to a neural network with an infinite number of units in the hidden layer [34], [51], [52]. Using GP regression allows us to extend the prosthetics taxonomy [15] by the notion of risk-based control. GP regression provides a principled method for us to obtain the decoder's uncertainty in interpreting the user's intention. We propose that risk-based control enables us to use the system's own understanding of how uncertain, given the data and the calibration, the decoding of a specific hand movement is. This allows the controller to make decisions about whether to go ahead with an unmodulated movement of the hand or to choose a modulated movement (e.g. using smaller amplitudes or forces to avoid dangerous situations where the user suffers from unintended actions). These unique benefits of probabilistic inference over standard neural network based approaches for decoding come at a cost. GP regression is numerically complex during training (which can, however, be performed offline) and is also somewhat more complex to evaluate than neural network based methods. We found that these potential concerns do not pose an issue for two reasons: 1. we proposed here the use of approximation methods to simplify and reduce the complexity of inferring the GP for prosthetic control, and 2. with constant advances in embedded processing, GPs have already been successfully deployed for real-time activity recognition in smartphone-based settings using state-of-the-art low-power embedded processors (e.g. [53], [54]).
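As an illustration of the risk-based control idea, here is a minimal, hypothetical sketch: the function name, threshold, and attenuation gain are assumptions, and `gp` is a fitted GP decoder as in the earlier sketch, not the authors' controller.

```python
# Hypothetical risk-aware command policy: attenuate the decoded joint velocity
# when the GP's predictive standard deviation (its uncertainty) is high.
def risk_aware_command(gp, x, std_threshold=10.0, attenuation=0.3):
    """Return (command, uncertainty) for one joint.

    gp            -- fitted GaussianProcessRegressor (supports return_std=True)
    x             -- feature row [lagged kinematics, EMG/MMG], shape (1, n_features)
    std_threshold -- uncertainty (deg/s) above which the command is modulated
    attenuation   -- gain applied to the command in the high-uncertainty regime
    """
    mean, std = gp.predict(x, return_std=True)
    gain = 1.0 if std[0] < std_threshold else attenuation
    return gain * mean[0], std[0]
```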

While better quality and more sensor signals, e.g. from high-density electrode arrays, will further improve our system's performance, we demonstrated the power of using the inherent structure of natural movements, with recordings from a few sensors, to achieve undersensored control, i.e. enabling a larger number of independent degrees of freedom to be controlled from a smaller number of sensors. This will become increasingly important as robotic prosthetic hands increase in sophistication, enabling not only dextrous control of grasping but also allowing us to perform in-hand object manipulation tasks: the EthoHand [7], for example, is capable of writing

a text message on a smart phone or rolling balls inside its palm [7]. This mechatronic capability is enabled by novel thumb designs that allow complex pointing of the thumb, and hence requires corresponding decoding of thumb action, as we demonstrated here. We showed that the EthoHand can render the complex spatio-temporal structure of natural human hand movements to an unprecedented extent [27], and here we used the very same spatio-temporal structure to boost decoding performance by providing context to the recorded muscle activity.

We propose that more detailed and higher-level context information will further boost the ability to correctly decode ambiguous and noisy sensor signals. This is enabled by current developments in haptic sensing algorithms (Haptic SLAM), which can reconstruct and recognise, from finger joint kinematics alone, the object shape being manipulated by the artificial hand [55], [56], but can also come from sensory feedback of the hand. We will aim to move beyond low-level kinematics, as used here, to higher-order structure of hand-based actions (e.g. reach, grasp, pull), tasks (e.g. drink from cup) and ultimately overall user intentions (e.g. having breakfast) [24]. This high-level information will eventually be available from smart home environments, but could also be inferred directly from the activity dynamics with information accessible to the prosthetic hand itself. These developments need to be supported by carefully designed performance evaluations that cleanly separate training and test data and evaluate decoding across both dynamic and isometric tasks taken from daily life. Ultimately, the best performance measures should be the ability, agility, and satisfaction with which an actual end-user can perform complex tasks, as demonstrated in [20]. In conclusion, we demonstrated how statistical machine learning approaches can yield, in a purely data-driven way, a method that efficiently combines contextual information with multi-modal data to boost the ability of end-users to control an artificial hand's fingers, better approaching the dexterity of the human hand.

ACKNOWLEDGMENT

The authors would like to acknowledge an anonymous reviewer, Charalambos Konnaris, and Giuseppe Zito for helpful comments on the manuscript.

REFERENCES

[1] A. Faisal, D. Stout, J. Apel, and B. Bradley, "The manipulative complexity of lower Paleolithic stone toolmaking," PLoS One, vol. 5, no. 11, p. e13718, 2010.
[2] E. Hecht et al., "Acquisition of Paleolithic toolmaking abilities involves structural remodeling to inferior frontoparietal regions," Brain Struct. Function, vol. 220, no. 4, pp. 2315–2331, 2015.
[3] R. Beasley, "General considerations in managing upper limb amputations," Orthop. Clin. North Amer., vol. 12, no. 4, pp. 743–749, 1981.
[4] J. Belter and J. Segil, "Mechanical design and performance specifications of anthropomorphic prosthetic hands: A review," J. Rehabil. Res., vol. 50, no. 5, p. 599, 2013.
[5] E. Biddiss, D. Beaton, and T. Chau, "Consumer design priorities for upper limb prosthetics," Disab. Rehabil. Assist. Technol., vol. 2, no. 5, pp. 346–357, 2007.
[6] E. A. Biddiss and T. T. Chau, "Upper limb prosthesis use and abandonment: A survey of the last 25 years," Prosthetics Orthotics Int., vol. 31, no. 3, pp. 236–257, 2007.
[7] C. Konnaris, C. Gavriel, A. A. C. Thomik, and A. A. Faisal, "EthoHand: A dexterous robotic hand with ball-joint thumb enables complex in-hand object manipulation," IEEE Biorobotics, vol. 6, pp. 1–6, Sep. 2016.
[8] T. R. Makin, F. de Vignemont, and A. A. Faisal, "Engineering embodiment: Neurocognitive constraints and requirements as design principles for human augmentation technology," Nature Biomed. Eng., vol. 1, no. 1, p. 0014, 2017.
[9] K. Østlie, I. M. Lesjø, R. J. Franklin, B. Garfelt, O. H. Skjeldal, and P. Magnus, "Prosthesis rejection in acquired major upper-limb amputees: A population-based survey," Disab. Rehabil. Assist. Technol., vol. 7, no. 4, pp. 294–303, 2012.
[10] R. Reiter, "Eine neue Elektrokunsthand," Grenzgebiete der Medizin, vol. 1, no. 4, pp. 133–135, 1948.
[11] L. H. Smith, T. A. Kuiken, and L. J. Hargrove, "Linear regression using intramuscular EMG for simultaneous myoelectric control of a wrist and hand system," in Proc. 7th Int. IEEE/EMBS Conf. Neural Eng., Apr. 2015, pp. 619–622.
[12] J. M. Hahne et al., "Linear and non-linear regression techniques for simultaneous and proportional myoelectric control," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 2, pp. 269–279, Feb. 2014.
[13] C. Castellini, A. E. Fiorilla, and G. Sandini, "Multi-subject/daily-life activity EMG-based control of mechanical hands," J. Neuroeng. Rehabil., vol. 6, no. 1, p. 1, 2009.
[14] D. Farina et al., "The extraction of neural information from the surface EMG for the control of upper-limb prostheses: Emerging avenues and challenges," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 4, pp. 797–809, Jul. 2014.
[15] A. Fougner, O. Stavdahl, P. J. Kyberd, Y. G. Losier, and P. A. Parker, "Control of upper limb prostheses: Terminology and proportional myoelectric control—A review," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 20, no. 5, pp. 663–677, May 2012.
[16] C. L. Pulliam, J. M. Lambrecht, and R. F. Kirsch, "EMG-based neural network control of transhumeral prostheses," J. Rehabil. Res. Develop., vol. 48, no. 6, p. 739, 2011.
[17] F. Sebelius, L. Eriksson, C. Balkenius, and T. Laurell, "Myoelectric control of a computer animated hand: A new concept based on the combined use of a tree-structured artificial neural network and a data glove," J. Med. Eng. Technol., vol. 30, no. 1, pp. 2–10, 2006.
[18] J. L. Nielsen, S. Holmgaard, N. Jiang, K. B. Englehart, D. Farina, and P. A. Parker, "Simultaneous and proportional force estimation for multifunction myoelectric prostheses using mirrored bilateral training," IEEE Trans. Biomed. Eng., vol. 58, no. 3, pp. 681–688, Mar. 2011.
[19] D. Yatsenko, D. McDonnall, and K. S. Guillory, "Simultaneous, proportional, multi-axis prosthesis control using multichannel surface EMG," in Proc. 29th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Sep. 2007, pp. 6133–6136.
[20] S. Amsuess, P. Goebel, B. Graimann, and D. Farina, "A multi-class proportional myocontrol algorithm for upper limb prosthesis control: Validation in real-life scenarios on amputees," IEEE Trans. Neural Syst. Rehabil. Eng., vol. 23, no. 5, pp. 827–836, Sep. 2015.
[21] J. J. Belić and A. A. Faisal, "Decoding of human hand actions to handle missing limbs in neuroprosthetics," Frontiers Comput. Neurosci., vol. 9, no. 27, 2015, doi: 10.3389/fncom.2015.00027.
[22] M. Santello, M. Flanders, and J. F. Soechting, "Postural hand synergies for tool use," J. Neurosci., vol. 18, no. 23, pp. 10105–10115, 1998.
[23] J. N. Ingram, K. P. Körding, I. S. Howard, and D. M. Wolpert, "The statistics of natural hand movements," Experim. Brain Res., vol. 188, no. 2, pp. 223–236, 2008.
[24] A. A. C. Thomik, S. Fenske, and A. A. Faisal, "Towards sparse coding of natural movements for neuroprosthetics and brain-machine interfaces," in Proc. IEEE Neural Eng. (NER), vol. 7, Sep. 2015, pp. 938–941.
[25] A. A. C. Thomik, D. Haber, and A. A. Faisal, "Real-time movement prediction for improved control of neuroprosthetic devices," in Proc. IEEE Neural Eng. (NER), vol. 6, Sep. 2013, pp. 625–628.
[26] D. Haber, A. A. C. Thomik, and A. A. Faisal, "Unsupervised time series segmentation for high-dimensional body sensor network data streams," in Proc. IEEE Wearable Implant. Body Sensor Netw. (BSN), vol. 11, Apr. 2014, pp. 121–126.
[27] C. Konnaris, A. A. C. Thomik, and A. A. Faisal, "Sparse eigenmotions derived from daily life kinematics implemented on a dextrous robotic hand," in Proc. IEEE Int. Conf. Biomed. Robot. Biomechatronics (BioRob), vol. 6, Sep. 2016, pp. 1–6.
[28] T. A. Kuiken et al., "Targeted muscle reinnervation for real-time myoelectric control of multifunction artificial arms," JAMA, vol. 301, no. 6, pp. 619–628, 2009.
[29] S. Fara, C. S. Vikram, C. Gavriel, and A. Faisal, "Robust, ultra-low-cost MMG system with brain-machine-interface applications," in Proc. 6th Int. IEEE/EMBS Conf. Neural Eng. (NER), Nov. 2013, pp. 723–726.
[30] N. Kamakura, M. Matsuo, H. Ishii, F. Mitsuboshi, and Y. Miura, "Patterns of static prehension in normal hands," Amer. J. Occupat. Therapy, vol. 34, no. 7, pp. 437–445, 1980.
[31] M. Šarić, Libhand: A Library for Hand Articulation, version 0.9, 2011.
[32] E. Todorov and Z. Ghahramani, "Analysis of the synergies underlying complex hand manipulation," in Proc. 26th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc. (IEMBS), vol. 2, Sep. 2004, pp. 4637–4640.
[33] D. A. Dickey and W. A. Fuller, "Distribution of the estimators for autoregressive time series with a unit root," J. Amer. Statist. Assoc., vol. 74, no. 366a, pp. 427–431, 1979.
[34] J. Bernardo et al., "Regression and classification using Gaussian process priors," in Bayesian Statistics, vol. 6.
[35] C. Rasmussen and C. Williams, Gaussian Processes for Machine Learning. Cambridge, MA, USA: MIT Press, 2006.
[36] J. Quiñonero-Candela and C. E. Rasmussen, "A unifying view of sparse approximate Gaussian process regression," J. Mach. Learn. Res., vol. 6, pp. 1939–1959, Dec. 2005.
[37] Q.-C. Pham, H. Hicheur, G. Arechavaleta, J.-P. Laumond, and A. Berthoz, "The formation of trajectories during goal-oriented locomotion in humans. II. A maximum smoothness model," Eur. J. Neurosci., vol. 26, no. 8, pp. 2391–2403, 2007.
[38] J. Silva, T. Chau, and A. Goldenberg, "MMG-based multisensor data fusion for prosthesis control," in Proc. 25th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., vol. 3, Apr. 2003, pp. 2909–2912.
[39] C. Gavriel and A. A. Faisal, "A comparison of day-long recording stability and muscle force prediction between BSN-based mechanomyography and electromyography," in Proc. 11th Int. BSN Conf., 2014, pp. 69–74.
[40] S. Fara, C. Gavriel, C. S. Vikram, and A. A. Faisal, "Prediction of arm end-point force using multi-channel MMG," in Proc. 11th Int. Conf. Wearable Implant. Body Sensor Netw., Sep. 2014, pp. 27–32.
[41] M. Xiloyannis, C. Gavriel, A. A. C. Thomik, and A. A. Faisal, "Dynamic forward prediction for prosthetic hand control by integration of EMG, MMG and kinematic signals," in Proc. 7th Int. IEEE/EMBS Conf. Neural Eng. (NER), vol. 7, Sep. 2015, pp. 611–614.
[42] K. Akataki, K. Mita, and M. Watakabe, "Electromyographic and mechanomyographic estimation of motor unit activation strategy in voluntary force production," Electromyogr. Clin. Neurophysiol., vol. 44, no. 8, pp. 489–496, 2004.
[43] T. W. Beck et al., "Does the frequency content of the surface mechanomyographic signal reflect motor unit firing rates? A brief review," J. Electromyogr. Kinesiol., vol. 17, no. 1, pp. 1–13, 2007.
[44] F. Tenore, A. Ramos, A. Fahmy, S. Acharya, R. Etienne-Cummings, and N. V. Thakor, "Towards the control of individual fingers of a prosthetic hand using surface EMG signals," in Proc. 29th Annu. Int. EMBS Conf., Aug. 2007, pp. 6145–6148.
[45] B. Crawford, K. Miller, P. Shenoy, and R. Rao, "Real-time classification of electromyographic signals for robotic control," in Proc. AAAI, 2005, pp. 523–528.
[46] N. Alves and T. Chau, "Uncovering patterns of forearm muscle activity using multi-channel mechanomyography," J. Electromyogr. Kinesiol., vol. 20, no. 5, pp. 777–786, 2010.
[47] L. Hargrove, Y. Losier, B. Lock, K. Englehart, and B. Hudgins, "A real-time pattern recognition based myoelectric control usability study implemented in a virtual environment," in Proc. 29th Annu. Int. Conf. IEEE Eng. Med. Biol. Soc., Apr. 2007, pp. 4842–4845.
[48] B. Lock, K. Englehart, and B. Hudgins, "Real-time myoelectric control in a virtual environment to relate usability vs. accuracy," in Proc. Myoelectr. Symp., 2005, pp. 1661–1667.
[49] A. A. Faisal, S. B. Laughlin, and J. A. White, "How reliable is the connectivity in cortical neural networks?" in Proc. IEEE IJCNN, vol. 2, Sep. 2002, pp. 1661–1667.
[50] A. A. Faisal, L. P. Selen, and D. M. Wolpert, "Noise in the nervous system," Nature Rev. Neurosci., vol. 9, no. 4, pp. 292–303, 2008.
[51] R. M. Neal, "Bayesian learning for neural networks," Ph.D. dissertation, Univ. Toronto, Toronto, ON, Canada, 1995.
[52] D. J. MacKay, "Gaussian processes—A replacement for supervised neural networks?" Cavendish Lab., Cambridge Univ., Cambridge, U.K., Tech. Rep., 1997.
[53] H. Shin, Y. Chon, and H. Cha, "Unsupervised construction of an indoor floor plan using a smartphone," IEEE Trans. Syst., Man, Cybern. C, Appl. Rev., vol. 42, no. 6, pp. 889–898, Jun. 2012.
[54] L. Zhang, J. Liu, H. Jiang, and Y. Guan, "SensTrack: Energy-efficient location tracking with smartphone sensors," IEEE Sensors J., vol. 13, no. 10, pp. 3775–3784, Oct. 2013.
[55] F. M. P. Behbahani, R. Taunton, A. A. C. Thomik, and A. A. Faisal, "Haptic SLAM for context-aware robotic hand prosthetics—Simultaneous inference of hand pose and object shape using particle filters," in Proc. IEEE Neural Eng. (NER), vol. 7, Sep. 2015, pp. 719–722.
[56] F. M. P. Behbahani, G. S. Buxarrais, and A. A. Faisal, "Haptic SLAM: An ideal observer model for Bayesian inference of object shape and hand pose from contact dynamics," Eurohaptics, vol. 6, pp. 1–6, Apr. 2016.

Michele Xiloyannis, photograph and biography not available at the time of publication.

Constantinos Gavriel, photograph and biography not available at the time of publication.

Andreas A. C. Thomik, photograph and biography not available at the time of publication.

A. Aldo Faisal, photograph and biography not available at the time of publication.
