GLIMPSE: Google Glass interface for sensory feedback in myoelectric hand prostheses · 2018. 3....
GLIMPSE: Google Glass interface for sensory feedback in
myoelectric hand prostheses
Marko Markovic, Hemanth Karnal, Bernhard Graimann, Dario Farina, Strahinja Dosen
We acknowledge financial support by the German Ministry for Education and Research
(BMBF) under the project INOPRO and the European Commission under the MYOSENS
(FP7-PEOPLE-2011-IAPP-286208) project.
M. Markovic, and S. Dosen are with the Institute of Neurorehabilitation Systems, Bernstein
Focus Neurotechnology Göttingen, Bernstein Center for Computational Neuroscience,
University Medical Center Göttingen, Georg-August University, 37075 Göttingen,
Germany (email: {marko.markovic, strahinja.dosen}@bccn.uni-goettingen.de).
D. Farina is with the Department of Bioengineering, Imperial College London, SW7 2AZ
London, UK (email: [email protected]).
H. Karnal is with the Georg-August University, 37075 Göttingen, Germany (email:
B. Graimann is with the Department of Translational Research and Knowledge Management,
Otto Bock HealthCare GmbH, 37115 Duderstadt, Germany (email:
Address for correspondence:
* Strahinja Dosen
Institute of Neurorehabilitation Systems
Bernstein Focus Neurotechnology Göttingen
Bernstein Center for Computational Neuroscience
University Medical Center Göttingen, Georg-August University
Von-Siebold-Str. 3, 37075 Göttingen, Germany
Tel: + 49 (0) 551 / 3920408
Fax: + 49 (0) 551 / 3920408
Email: [email protected]
Abstract
Objective: Providing sensory feedback to the user of the prosthesis is an important challenge. The common
approach is to use tactile stimulation, which is easy to implement but requires training and has limited
information bandwidth. In this study, we propose an alternative approach based on augmented reality.
Approach: We have developed the GLIMPSE, a Google Glass application which connects to the prosthesis via a
Bluetooth interface and renders the prosthesis states (EMG signals, aperture, force and contact) using
augmented reality (see-through display) and sound (bone conduction transducer). The interface was tested in
healthy subjects who used the prosthesis with (FB group) and without (NFB group) feedback during a modified
clothespin test in which the difficulty of the task could be varied. The outcome measures were the number of
unsuccessful trials, the time to accomplish the task, and the subjective ratings of the relevance of the feedback.
Results: There was no difference in performance between FB and NFB groups in the case of a simple task
(basic, same-color clothespins test), but the feedback significantly improved the performance in a more complex
task (pins of different resistances). Importantly, the GLIMPSE feedback did not increase the time to accomplish
the task. Therefore, the supplemental feedback might be useful in more demanding tasks, which are thereby
less likely to benefit from learning and feedforward control. The subjects integrated the supplemental
feedback with the intrinsic sources (vision and muscle proprioception), developing their own idiosyncratic
strategies to accomplish the task.
Significance: The present study demonstrates a novel self-contained, ready-to-deploy, wearable feedback
interface based on widely used and readily available technology. The interface was successfully tested and was
proven to be feasible and functionally beneficial. The GLIMPSE can be used as a practical solution but also as a
general and flexible instrument to investigate closed-loop prosthesis control.
Keywords – augmented reality, closed-loop, upper limb prosthesis, grasping, force control, smart-
devices
1. Introduction
Closing the loop in upper-limb prosthetics by providing artificial somatosensory feedback to the
user is an important challenge, as emphasized by the users, industry and research community [1]–[3].
The aim is to restore bilateral communication between the brain and its end effector, mimicking the
biological information transmission. Natural somatosensory feedback is instrumental for the motor
control of grasping [4] as well as the exploration of the environment [5]. Despite the present interest
and past research, only one commercially available device has recently been presented, conveying
grasping force information using a single vibration motor [6].
The most common methods to provide feedback rely on sensory substitution [7]. In sensory
substitution, the information that is lost due to amputation is transmitted using a modality different
from that employed naturally (e.g., pressure to vibration). In this approach, the information obtained
from the prosthesis sensors (e.g., grasping force) is coded into stimulation patterns, which are
delivered through electro- or vibro-tactile stimulation to the skin of the residual limb. For example,
the measured grasping force can be proportionally translated into intensity and/or frequency of
stimulation [8]. Of course, in order to exploit this type of feedback, the user first needs to learn to
perceive and decode the elicited tactile sensations. Many interfaces with single and multiple
stimulation channels have been previously tested, transmitting most often grasping force [9]–[13] but
also proprioceptive information (e.g., elbow angle [14], wrist rotation [15], hand aperture size [16],
motion [17]). There are also methods to provide modality-matched feedback through the use of force
applicators [18], [19] or pressure cuffs [20]. Finally, the feedback can be restored using invasive
techniques, by stimulating peripheral nerves [21], [22], [23] or cortical structures using implantable
interfaces [24], [25].
Since it is non-invasive and technically simple to implement, the surface stimulation is still the
most common method. However, this approach is characterized by important drawbacks. First, the
tactile interfaces have a limited bandwidth. Due to the physiological [26], [27] (e.g., forearm receptor
density) and technological [28] (e.g., stimulation selectivity) constraints, only a limited amount of
information can be transmitted to the user [28], [29]. Second, the provided feedback can be unintuitive,
since the user is asked to control a prosthesis variable (e.g., grasping force) by relying on a variable of
a different nature (e.g., forearm vibration). This means that the provided information is not used for
online control under normal circumstances. Humans routinely use vision or hand touch and pressure
to control the movements, whereas the vibration frequency or intensity is a novel and unfamiliar
input. Consequently, sensory substitution relying on the tactile sense requires training and
adaptation, which become longer and more tedious as the feedback complexity increases. On the other
hand, multichannel interfaces with sophisticated coding schemes become increasingly relevant, as
they can accommodate the states of prostheses with multiple degrees of freedom, which are flexible
multifunction systems. In this case, an advanced feedback is needed to ensure the unambiguous
information transfer of multiple state variables, as shown in [9]. Finally, the tactile interface needs to
be integrated into the prosthesis, which incurs additional hardware costs; in order for the feedback
interfaces to find their way to the consumers, the prosthesis hardware needs to be redesigned.
On a more general level, since the first tactile feedback systems were introduced [30], [31], there
is an ongoing debate on the actual role and benefits of artificial feedback. The most important
question is whether and to what extent the feedback improves the prosthesis performance and utility.
The results in the literature are often contradictory. The studies that exclude other feedback sources,
which are inherently present in the prosthesis (e.g., sound or vision), or alter the control paradigm
(e.g., using a joystick instead of myocontrol [32], or a virtual prosthesis [12]), usually report
benefits of the tactile feedback [12], [15], [16], [33]–[37]. On the other hand, studies that evaluated
the tactile feedback using more realistic setups, including real prosthesis and functional activities,
often failed to demonstrate any significant benefits of the approach [10], [38]. In some cases, the
feedback was useful but only in a subset of conditions and subjects [16], [39]. Nevertheless, there is
also evidence of functional gains with tactile feedback in daily living tasks [40], [41].
The goal of the present study was to address the aforementioned drawbacks of the tactile
interfaces and at the same time provide new insights about how different intrinsic and extrinsic
proprioceptive, visual, tactile and audio cues interact and contribute to the closed-loop prosthesis
control. To this aim, we have implemented and evaluated a novel multi-modal feedback interface that
utilizes augmented-reality (AR) and sound. Specifically, we developed the GLIMPSE (Google Glass
Interface for Myoelectric ProstheSEs) that connects, via Bluetooth (BT), to the Michelangelo Hand
Prosthesis [42] and renders the AR feedback on the embedded optical head-mounted display
(OHMD). The potential benefits of providing visual feedback in the context of prosthetics have been
recognized before [43], [44]. For example, in [43] the bicolor LED placed on the prosthesis thumb
was used to communicate the grip force to the user. In [44], the subjects wore AR glasses and the
information on the hand aperture and orientation was represented by projecting the virtual box shape
(AR) directly in front of the target object. This was a feasibility study: the data processing was
performed on a laptop, and the glasses were cumbersome and unsuitable for daily application.
Here we present the first fully self-contained and wearable AR feedback system, implementing non-
intrusive visual feedback through a miniature see-through display. The Google Glass was selected for
this implementation since it represents a unique platform that embeds the hardware functionalities
typical of a smart-phone into a compact and ergonomic system. The device includes an AR OHMD
and a bone conduction transducer that can be used to convey intuitive, high-bandwidth visual and
audio feedback, respectively. The GLIMPSE app was designed to be flexible, integrating several
feedback layers and variables that can fit different application scenarios.
The developed system was evaluated experimentally using a realistic setup and clothespins
reallocation test, with two levels of difficulty (same-pin-color and mixed-color reallocation). The
evaluation included objective (time, success rate) as well as subjective outcome measures
(questionnaire). In addition to presenting a radically novel technical solution for the feedback in
prosthetics, the present experiments provide important insights into the general role and benefits of
feedback especially in the context of task learning and execution. The evaluation considered multiple
variables provided through intentional artificial visual feedback as well as the feedback sources
intrinsic to the prosthesis (e.g., motor sound). This was possible due to the high bandwidth of the AR
interface that allowed us to simultaneously communicate an abundance of feedback variables (force,
aperture, EMG biofeedback [45] and touch) to the user. This allowed the subjects to freely select
what was most useful for accomplishing the task. In that sense, the present study can be regarded as
an open-ended subjective exploration of the relevance of the feedback modalities.
The main hypothesis regarding the benefit of feedback was that the supplemental visual
information would be useful only during the more challenging task (mixed color reallocation),
whereas the intrinsically available visual, proprioceptive and audio cues would be sufficient to
perform a simpler task (same-color reallocation). Moreover, regarding the relevance of the feedback
variables, we hypothesized that the EMG feedback would prove to be the most useful to the user. This
was based on our previous research where we introduced EMG feedback [45], [46] as an alternative to
classically used feedback paradigms (e.g., force feedback), demonstrating that the novel method
indeed improved the control of grasping force during routine grasping as well as force steering.
Finally, we also expected that the supplemental visual feedback would require a certain, constant
amount of user attention to be utilized to its full extent. The assumption was that the users would need
to monitor the feedback shown in the glasses while executing the grasp, which could increase the time
required to accomplish the task.
2. Material and methods
2.1. Overall system architecture
The overall system architecture is depicted in Figure 1. The system consists of two components:
1) Michelangelo left-hand prosthesis with a wrist rotator and two 13E200 dry EMG electrodes with
integrated amplifiers (Otto Bock Healthcare GmbH, Vienna, AT) [47] and 2) Google Glass [48]
(Alphabet Inc., California, USA).
The Michelangelo prosthesis implements commercial state-of-the-art (SoA) two-channel
sequential and proportional myoelectric control, with trigger-based state-switching between three
available functions: palmar grip, lateral grip and wrist rotation. The prosthesis is instrumented with
three position encoders (thumb, fingers, and wrist) and a single force transducer positioned at the base
of the thumb, measuring the hand aperture, hand rotation and grasping force, respectively. The
embedded prosthesis controller samples the sensor data and the processed EMG signals at the
frequency of 100 Hz. The sampled data plus the flag indicating the currently active prosthesis
function are streamed via a proprietary BT communication interface to the Google Glass.
The Google Glass implements standard smartphone components: a 700 mAh battery, a dual-core
CPU @ 1.2 GHz, 1 GB RAM, 16 GB storage, a 5 Mpix RGB camera, a touchpad, a dynamic speaker, and BT
and Wi-Fi modules, with the addition of an 800 × 480 pixel WVGA OHMD. The Google Glass
operating system is based on a special version of Android (4.4.2), and it can run apps called
Glassware that are optimized for the device. We developed a custom Glassware App, hereafter called
the GLIMPSE (see Annex I for implementation details). The GLIMPSE communicates with the
Michelangelo prosthesis, receives and decodes the prosthesis sensor data and renders the feedback at
the refresh rate of 25 Hz on the embedded OHMD. Importantly, the user perceives the OHMD as a
25” high-definition screen placed eight feet away. The system overview, overall user
interface as well as the feedback layout are presented in Figure 1. The rendered feedback has three
different layouts (see section 2.2 and additional materials) through which the user can cycle
in real time using the “swipe left/right” gesture. Therefore, by providing several functionally
different feedback layers, the application can accommodate different usage scenarios and user needs
(e.g., fine, sensitive force control task, or fast, dexterous prosthesis control).
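The data path described above (a 100 Hz sensor stream in, a 25 Hz display refresh out) can be sketched as follows. This is an illustrative reconstruction only: the Michelangelo BT protocol is proprietary, so the packet layout, field names, and functions below are hypothetical, showing just the decode-and-downsample structure.

```python
import struct

# Hypothetical packet layout for the 100 Hz sensor stream (the real
# Michelangelo BT protocol is proprietary and not documented here).
# Assumed fields: aperture, rotation, force, flexor EMG, extensor EMG
# (floats) plus a 1-byte flag for the currently active prosthesis function.
PACKET_FMT = "<5fB"
PACKET_SIZE = struct.calcsize(PACKET_FMT)

def decode_packet(raw: bytes) -> dict:
    """Decode one sensor sample from the (assumed) binary stream."""
    aperture, rotation, force, emg_flex, emg_ext, func = struct.unpack(
        PACKET_FMT, raw)
    return {"aperture": aperture, "rotation": rotation, "force": force,
            "emg_flex": emg_flex, "emg_ext": emg_ext, "function": func}

def render_indices(n_samples: int, fs_in: int = 100, fs_out: int = 25):
    """Indices of the samples actually drawn: with a 100 Hz stream and a
    25 Hz refresh, every 4th sample reaches the display."""
    step = fs_in // fs_out
    return list(range(0, n_samples, step))
```

In the actual app, decoding and rendering run continuously on the Glass; the sketch only captures the rate relationship between the two loops.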
Figure 1. System overview and GLIMPSE interface. The GLIMPSE consists of three functional
blocks: a) Main Menu which handles the overall App behavior, initializes the BT device scanning and
data logging; b) Device selection menu which lists all available BT devices (filtered by MAC address)
and c) Feedback rendering menu which renders selected feedback layout. The user navigates through
menus via touchpad by using basic finger gestures (scroll, tap, swipe down). Once the BT connection
between the prosthesis and the Glass has been established, the prosthesis sends the sensor data,
sampled at 100 Hz. The data are decoded by the GLIMPSE application and rendered on the embedded
display at the refresh rate of 25 Hz.
2.2. Feedback layouts
Three feedback layouts addressing potentially different application scenarios were developed; one
layout was used in the present experiment and it is therefore described in the text (see Figure 2), while
the other two solutions are illustrated in the additional material attached to the present manuscript.
The user can view only one layout at a time, but he/she is free to change it at any given point by
performing the “swipe left/right” gesture, as shown in Figure 1c. In addition to the depicted visual
output, the contact (force > threshold) event is communicated to the user via the embedded audio
output device (bone transducer) by playing a high-pitched “tap” sound from the Glass user interface
library.
The layout of interest in the present experiment (Figure 2) is designed for assisting delicate tasks,
i.e., those that include fine force and aperture control. It consists of two horizontal bars, the top bar
indicating the current aperture (in green or yellow) and the lower bar indicating the current EMG (blue)
and force (red). The horizontal bars are divided in six segments via five vertical lines. When the
prosthesis is fully opened, the aperture bar is full and as the prosthesis closes, the bar size decreases
from right to left, reaching zero once the hand is fully closed. When the contact event is detected, the
color of the aperture bar changes from green to yellow in order to indicate to the user that the
prosthesis has grasped an object. The lower horizontal bar displays the current level of EMG activity
as well as the measured grasping force. The EMG from the flexor muscle is rendered as a blue bar
starting from the left, and increasing to the right, while the extensor activity starts from the opposite
direction (right), and increases to the left. The activity from both muscles is therefore displayed using
a single bar. This was necessary to visualize all the variables of interest on a display of limited
size. The measured grip force is indicated using a vertical red line moving from left to right as the
force increases, and in the opposite direction as it decreases. Importantly, the EMG signals from
the prosthesis were low-pass filtered using a first-order Butterworth IIR filter with the 1.5 Hz cutoff
frequency. This decreased the variability of the EMG so that the EMG level (bar) was stable enough
to be perceived and controlled online by the subject. Finally, since the Michelangelo prosthesis can
produce a rather powerful grip force of ~100 N, the EMG and force feedback were provided only for
the lower 60% of the respective force/EMG range. The aim was to allow the user to benefit from the
closed-loop control with an increased resolution and controllability, in the operation range where it
matters the most (i.e., low and medium speeds/forces).
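The two signal-conditioning steps described above, 1.5 Hz first-order Butterworth smoothing of the EMG and mapping the feedback onto the lower 60% of the range, can be sketched in a few lines. This is an illustrative reconstruction, not the Glassware code: the function names and the pure-Python bilinear-transform filter implementation are our own.

```python
import math

FS = 100.0   # prosthesis sensor sample rate (Hz), from the system description

def butter1_lowpass(cutoff_hz=1.5, fs=FS):
    """Coefficients of a first-order Butterworth low-pass (bilinear
    transform), matching the 1.5 Hz smoothing described in the text."""
    k = math.tan(math.pi * cutoff_hz / fs)
    b0 = k / (1.0 + k)
    return (b0, b0), (1.0, (k - 1.0) / (k + 1.0))

def smooth_emg(emg):
    """Causal filtering of the EMG signal so the on-screen bar is stable
    enough to be perceived and controlled online."""
    (b0, b1), (_, a1) = butter1_lowpass()
    y, x_prev, y_prev = [], 0.0, 0.0
    for x in emg:
        out = b0 * x + b1 * x_prev - a1 * y_prev
        y.append(out)
        x_prev, y_prev = x, out
    return y

def to_bar_fraction(value, full_scale, displayed_range=0.6):
    """Feedback spans only the lower 60% of the force/EMG range, for
    higher resolution where control matters most; higher values saturate
    the bar."""
    return min(max(value / (full_scale * displayed_range), 0.0), 1.0)
```

With this mapping, 60 N (60% of the 100 N maximum) already fills the force bar completely, which is the trade-off of resolution against range mentioned in the text.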
This layout implements an intuitive representation of the prosthesis control and operation. The
two EMG signals are the command inputs for the prosthesis, proportionally controlling closing and
force increase, and opening and force decrease, respectively. Therefore, the user sees explicitly and
precisely the control input he/she is sending to the system, and is therefore able to modulate this input
online. As already demonstrated in [45], this type of EMG feedback allows the user to act predictively
and anticipate the outcome of their actions (e.g., the resulting force). In addition to the control inputs,
the feedback depicts the prosthesis outputs (states), i.e., aperture and force. When the prosthesis is in
the grasp function, the activation of the flexor muscles initiates closing of the prosthesis; the stronger
the contraction (the height of the EMG bar), the higher the velocity of closing. Consequently, the
aperture bar decreases (Figure 2a). Once the prosthesis grasps an object, the aperture bar remains at,
approximately, the same level (stiff object) and the red line indicating the generated grasping force
appears (Figure 2b). If after contact the flexor activity is further increased, the prosthesis will tighten
the grip. Effectively, this will be seen as if the EMG bar pushes the line indicating the grasping force
to the right (i.e., stronger EMG activation, stronger force, Figure 2c). On the other hand, if the
extensor muscles are activated, the EMG bar appears on the other side of the layout and increases in
the opposite direction. Again, the EMG bar pushes the force line, but this time to the left, indicating
the force decrease (Figure 2d). Eventually, the hand starts opening and the aperture bar increases; the
stronger the extensor contraction (the height of the EMG bar), the higher is the velocity of opening
(Figure 2e).
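As a rough illustration of the layout dynamics described above, the following sketch maps the normalized prosthesis states to bar geometry on a hypothetical 800 px wide canvas. The pixel width, function names, and tie-breaking rule are assumptions for illustration; the actual GLIMPSE drawing code is Android Glassware and is not reproduced in the paper.

```python
BAR_W = 800   # assumed drawable width in pixels

def aperture_bar_px(aperture_frac):
    """Top bar: full when the hand is fully open, shrinking from right
    to left as the hand closes (Figure 2a)."""
    return int(round(BAR_W * aperture_frac))

def emg_bar_px(flex_frac, ext_frac):
    """Lower bar: flexor EMG grows from the left edge, extensor EMG from
    the right; both muscles share a single bar. Returns the anchored side
    and the bar length in pixels."""
    if flex_frac >= ext_frac:
        return ("left", int(round(BAR_W * flex_frac)))
    return ("right", int(round(BAR_W * ext_frac)))

def force_line_px(force_frac):
    """Vertical red force line: moves left to right as the force
    increases (Figure 2b-c), and back as it decreases (Figure 2d)."""
    return int(round(BAR_W * force_frac))
```

During a grip-tightening contraction, the flexor bar length and the force-line position both grow toward the right, producing the "EMG bar pushes the force line" impression the text describes.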
Figure 2. The feedback layout used for the experimental evaluation. It consists of two horizontal bars
indicating current EMG (blue), force (red) and aperture (green or yellow). The snapshots taken during
a) prosthesis closing, b) contact and force generation, c) increase in grasping force, d) extensor
activation and force decrease, and e) hand open. See text for explanation.
2.3. Experimental setup and protocol
Twenty able-bodied subjects (26±3 yrs.) with little or no prior myoelectric control experience
participated in the experiment. The subjects were split into two groups: half of them used GLIMPSE
(feedback group - FB), while the other half used the prosthesis without any additional feedback device
(no feedback group - NFB). The prosthetic hand was attached to a custom-made ergonomic splint and
strapped firmly using Velcro straps to the subjects’ right forearm, so that it was positioned directly
beneath and perpendicular to the subjects’ hand. Due to the space constraints, two EMG electrodes
were placed on the contralateral arm, over the finger and wrist flexor and extensor muscles. The
exact position was determined by palpating the contracted muscles. The feedback group had the
Google Glass mounted in such a way that the images rendered on its OHMD appeared,
approximately, in the center of the subjects’ field of view.
Myocontrol was calibrated using the official therapist software AxonSoft (Otto Bock, GmbH).
The calibrated parameters were uploaded to the embedded prosthesis controller. The sensitivity of
myocontrol was adjusted individually for each subject by changing the electrode gains or by adjusting
software thresholds. The GLIMPSE did not require any specific setup, except to be started at the
beginning of the experiment.
A brief training session followed the initial setup in order to ensure that the myoelectric control
performance was at the satisfactory level. If the subject could close/open the hand, increase/decrease
the force and switch between the functions using comfortable muscle contractions, the control was
deemed good, and the session could proceed. The subjects participating in the feedback group were
asked to put on the Google Glass and adjust the display position by rotating the screen. They were
then instructed to start the GLIMPSE, navigate through the menus and connect to the prosthesis. After
the connection was established, they were asked to navigate to the appropriate feedback layout, which
was utilized through all experimental sessions in the feedback condition. The feedback use and its
purpose were explained to all subjects participating in the feedback group. Afterwards, they were
allowed to briefly test the prosthesis and the feedback in order to familiarize themselves with the closed-loop
control (5-10 min).
The subjects were then introduced to the experimental task, which was a modified version of the
clothespin reallocation test (Figure 3a, b, and c). The task involved grasping and relocating
clothespins, which were colored according to the force of the embedded spring resisting the grasp. In
the present experiment, four differently colored clothespins were used (yellow, red, green, and black).
In addition, the pins were sensorized using a custom-made solution. A small LED was attached to the
pin and connected to the switch placed on the pin handles (Figure 3c). Therefore, when the handles
touched (~70% closed pin), the LED would light up, indicating to the user that the applied force was
too high (object broken). The task for the subject was to grasp the clothespin attached to the
lower/middle horizontal pole using palmar grasp, transport it to the pole immediately above/below,
and release it by fully opening the prosthesis. Importantly, the subjects were required to perform this
task without activating the LED. Therefore, they had to grasp the pin by applying the right amount of
force, which also produced the right amount of hand closing. The force, or equivalently the amount of
hand closing, needed to be high enough to open the clothespin so that it could be easily removed from
the pole, but not excessively high to open the clothespin fully as that would activate the LED. This
prevented the subject from simply using the maximal force to accomplish the task. Instead, he/she needed
to consider the pin resistance and produce the aperture/force within a specific window depending on
the pin color (Table 1). Therefore, the task required a strict control of grasping force. Importantly,
each of the clothespins was individually calibrated so that the size of the force window was
approximately the same across all clothespins. If the subject activated the LED or dropped the pin
during the reallocation, the trial was deemed unsuccessful, and the reallocation task was restarted.
This was done until the subject successfully accomplished the task.
Table 1. Summary of minimal and maximal allowed forces/apertures (i.e., force/aperture windows)
for each of the clothespins used. The force and aperture values are given relative to the prosthesis
maximal grip force (100 N) and clothespins maximal aperture (3.2 cm).
Pin color   Min aperture [%]   Max aperture [%]   Aperture window [%]   Min force [%]   Max force [%]   Force window [%]
Yellow             33                 71                   38                  7              15                 8
Red                33                 66                   33                 13              23                10
Green              33                 57                   24                 23              32                 9
Black              33                 57                   24                 35              43                 8
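The success criterion implied by Table 1 can be expressed as a simple window check on the applied force. The dictionary below copies the force windows from the table; the function name and interface are ours, for illustration only.

```python
# Force windows from Table 1, as a percentage of the prosthesis maximal
# grip force (100 N).
FORCE_WINDOW = {          # pin color: (min %, max %)
    "yellow": (7, 15),
    "red":    (13, 23),
    "green":  (23, 32),
    "black":  (35, 43),
}

def reallocation_ok(pin_color, applied_force_n, max_force_n=100.0):
    """A grasp succeeds only if the applied force opens the pin enough to
    lift it off the pole but stays below the level that brings the
    handles together and lights the LED ("object broken")."""
    lo, hi = FORCE_WINDOW[pin_color]
    pct = 100.0 * applied_force_n / max_force_n
    return lo <= pct <= hi
```

Note how narrow the windows are (8-10% of the full force range, i.e., roughly 8-10 N), which is what makes the task a strict test of force control.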
The experimental protocol (Figure 3d) consisted of five sections, each comprising six blocks. In
each block, the subjects had to reallocate a clothespin four times successfully, as described above and
displayed in Figure 3a, b. This resulted in 24 and 120 successful reallocations per section and per
subject, respectively (i.e., five sections × six blocks × four reallocations). Note that the total number
of trials per block, per section and overall could have been higher, since unsuccessful reallocations also
counted as trials. At the beginning of each of the first four sections, a pin of a different color
was placed on the starting pole and the subject performed the reallocation task (Figure 3a) with that
pin for six consecutive blocks. The order of the pin colors was randomized across sections. After each
block, a short break of 30 seconds was introduced to prevent fatigue. A longer, 5-min break was
introduced after each section. The first four sections are hereafter denoted as the same-color
clothespin reallocation tasks. In the final, fifth section, four pins, one per color, were placed on the
lower pole (Figure 3b) and the subject was instructed to reallocate each pin to the middle pole (one
block). The order of the pin colors from right to left was randomized across blocks. The fifth section
is hereafter denoted as the mixed-color clothespin reallocation task.
In summary, in the first four sections, the subjects manipulated the pin of a single color.
Therefore, for the successful task accomplishment, they needed to produce repeatedly the force within
the same target window (Table 1). In the last section, however, the task was more challenging, as they
needed to apply different forces for each pin within the block.
Finally, at the end of the experiment, the subjects were asked to fill out a questionnaire. The
questionnaire (see the Annex II) assessed to which extent the subjects relied on the specific feedback
modalities in order to accomplish the task. The subjects were asked to rate (100 points, 5-point
resolution) the variables that were transmitted through the artificial visual feedback using GLIMPSE
as well as the incidental sources of feedback. The NFB group was asked to assess the three
intrinsically available feedback modalities: visual observation of the prosthesis motion, motor sound
and proprioceptive feedback from own muscles (Annex II, questions 5-7). In addition to the intrinsic
sources, the FB group was asked to evaluate four variables that were provided via GLIMPSE (EMG,
Force, Aperture, Contact Event; Annex II, questions 1-4). Therefore, the questionnaire provided a
detailed insight into how the subjects participating in different experimental groups valued different
information sources, as well as how the extrinsic and intrinsic feedback sources might interact.
Figure 3. The order of reallocations for the same-color (a) and mixed-color (b) clothespin reallocation
tasks; Modified clothespin with a LED (c) and experimental protocol (d). The modified clothespin
reallocation test (a, b) uses horizontal poles and sensorized clothespins equipped with a LED, a
custom-made contact switch and a battery (c). The LED is activated if the handles touch each other,
indicating that the exerted force was too high (object broken). The experimental protocol (d) consists
of 120 successful reallocations split over five tasks and thirty blocks. In the same-color task, (a)
a single pin was reallocated from the lower to the middle poles and vice versa, whereas in the mixed-
color condition (b), four pins (one per color) were reallocated successively from the lower to the
middle pole.
2.4. Outcome measures and data analysis
For each experimental block two outcome measures were introduced: 1) the block completion
time (BCT) and 2) the number of unsuccessful reallocations per block (URB). Importantly, if the
subject dropped the pin or activated the LED (unsuccessful reallocation), the timer was paused and
then resumed once the subject restarted the trial. While the BCT measures the speed at which the
subjects finished a single block, the URB counts the within-block failure rate. The total number of
trials that was needed to complete a single block was given by 4+URB. Since the time from the start
of the trial to dropping or “breaking” the pin contributes to the BCT, the two outcome measures were
not completely independent, but they still emphasized different aspects of performance. Furthermore,
as most of the unsuccessful reallocations happened at the beginning of the trial, while the subjects
were trying to stably grasp the pin, the interaction between the BCT and URB was indeed minimal.
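The two outcome measures can be sketched from a block's trial log. The record format and names below are hypothetical, but the logic follows the text: failed-trial time counts toward BCT (the timer was only paused between a failure and the restart), and URB counts the failures.

```python
# One block ends after four successful reallocations. Each trial record
# is (duration_s, succeeded); the pause between a failure and the trial
# restart is not timed, so BCT is simply the sum of trial durations.
def block_outcomes(trials):
    """Return (BCT, URB) for one block of (duration_s, succeeded)
    records. A completed block has exactly four successes, so the total
    number of trials equals 4 + URB."""
    bct = sum(duration for duration, _ in trials)
    urb = sum(1 for _, ok in trials if not ok)
    return bct, urb
```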
The data analysis was performed using MATLAB 2015b (MathWorks, Natick, US-MA). The
outcome measures were analyzed per task (same and mixed-color reallocation) and per experimental
block. The aim was to assess the influence of feedback on the performance in a specific task as well
as on the overall learning expressed as the change in performance across blocks. For per task analysis,
the outcome measures were computed for each subject in each of the five experimental tasks and the
results were then pooled across all subjects with respect to their feedback group (FB or NFB). The
pooled data was used to calculate the group performance for each task, irrespective of the
experimental block. Similarly, for per block analysis, the outcome measures were computed for each
subject in each of the six experimental blocks and the results were then pooled across all subjects with
respect to their feedback group (FB or NFB). The pooled data was used to calculate the group
performance for each block, irrespective of the experimental task.
Since the questionnaires were presented only once to each subject, the data from them was simply
pooled and compared across the two feedback groups (FB and NFB).
Since the data did not pass the normality test (Lilliefors test), the Friedman test was applied to
assess statistically significant differences across conditions within each feedback group (FB or
NFB), followed by Tukey’s honestly significant difference test for pairwise comparison. The
Wilcoxon rank-sum test was used for the comparisons between the same conditions across the
feedback groups (FB vs. NFB). The results are reported as median [inter-quartile range (IQR)]. The p-
value of 0.05 was selected as the threshold for statistical significance.
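As a sketch of the core of this pipeline, the Friedman chi-square statistic used above can be computed as follows (a pure-Python illustration only; the study used MATLAB's built-in tests, and the normality check and post-hoc comparisons are omitted):

```python
def friedman_statistic(blocks):
    """Friedman chi-square for k related samples.

    blocks: one list per subject, holding one measurement per condition.
    Returns the chi-square statistic with k - 1 degrees of freedom.
    """
    n = len(blocks)     # number of subjects
    k = len(blocks[0])  # number of conditions (e.g., experimental blocks)
    rank_sums = [0.0] * k
    for row in blocks:
        # rank the conditions within each subject, averaging ranks on ties
        order = sorted(range(k), key=lambda j: row[j])
        ranks = [0.0] * k
        i = 0
        while i < k:
            j = i
            while j + 1 < k and row[order[j + 1]] == row[order[i]]:
                j += 1
            avg = (i + j) / 2 + 1
            for m in range(i, j + 1):
                ranks[order[m]] = avg
            i = j + 1
        for j in range(k):
            rank_sums[j] += ranks[j]
    # chi2 = 12/(n*k*(k+1)) * sum(R_j^2) - 3*n*(k+1)
    return 12.0 / (n * k * (k + 1)) * sum(r * r for r in rank_sums) - 3 * n * (k + 1)
```

With six experimental blocks, k = 6 gives DoF = k - 1 = 5, matching the values reported in the Results.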
3. Results
In total, 2962 reallocations were performed, of which 2400 were successful (20 subjects × 120
successful reallocations). The number of reallocations differed between the two feedback conditions:
the FB group performed fewer reallocations than the NFB group (1434 vs. 1528).
The results across experimental blocks are shown in Figure 4a and b. They demonstrate
that the subjects were learning the task across the session, with a similar rate of learning in both
feedback conditions. There was a significant difference in BCT across blocks for both FB (𝑝𝐹 < 0.01, DoF = 5, χ² = 16.28) and NFB (𝑝𝐹 < 0.05, DoF = 5, χ² = 14.85). More specifically, the BCT gradually
decreased, dropping significantly from 31.2 s [13 s] and 28.6 s [6.5 s] in the first block to 26.9 s
[2.7 s] and 24.2 s [8.1 s] in the last block for FB (p < 0.05) and NFB (p < 0.05), respectively. There was
no significant difference in URB across blocks. Therefore, the subjects became faster in performing
the reallocation while the task failure rate remained similar. Likewise, no statistical difference
between the conditions (NFB vs. FB) for any of the experimental blocks or outcome measures was
found.
The results across experimental tasks are summarized in Figure 4c and d. Within the feedback
conditions, the URB was significantly different across tasks in both FB (𝑝𝐹 < 0.01, DoF = 4, χ² = 16.18) and NFB (𝑝𝐹 < 0.01, DoF = 4, χ² = 17.54), and the BCT only in NFB (𝑝𝐹 < 0.05, DoF = 4, χ² = 12.56). The post-hoc analysis determined that the yellow clothespin was significantly more difficult
to reallocate successfully (higher URB) compared to the green clothespin in both FB (p < 0.001) and
NFB (p < 0.01). Additionally, in the NFB condition, the URB in the mixed-color reallocation task was
three times higher than in the green clothespin reallocation task, and this difference was
statistically significant (p < 0.05). The BCT was, however, similar across all pin colors, except between
the red and mixed tasks in NFB condition (p < 0.05). Therefore, the fact that the subjects were more
successful in reallocating a pin of a specific color did not significantly affect the time to handle that pin.
The URB for the green clothespin was, independent of the feedback condition, five times lower than
the URB for the yellow clothespin; nevertheless, the BCT for the two clothespin colors was not
substantially different.
During the same-color reallocation tasks, there was no statistically significant difference in
performance in FB versus NFB. In the mixed-color clothespin reallocation task, however, the artificial
visual feedback via GLIMPSE proved useful: the median URB in the FB condition was half of that in
NFB (0.75 [0.16] vs. 1.5 [1.33]), and this difference was statistically
significant (p < 0.05). However, the BCT was not significantly different between FB and NFB in any
of the tasks.
Figure 5 shows the questionnaire results for both feedback conditions. Within the feedback
conditions, there was a significant difference in rating for both FB (𝑝𝐹 < 0.01, DoF = 6, χ² = 20.13)
and NFB (𝑝𝐹 < 0.01, DoF = 2, χ² = 11.42). The NFB group rated the contribution of the intrinsic
visual feedback significantly higher than the sound/vibration coming from the prosthesis (87 [22] vs.
27 [50], p < 0.01). The FB group rated the EMG feedback as the lowest (22 [10]) of all seven factors,
and statistically different with respect to both force and vision with p < 0.01 and p < 0.05,
respectively. When compared across the feedback conditions (FB vs. NFB), the two subject groups
rated the contributions of intrinsic factors similarly, with no statistical differences except for the
intrinsic visual feedback that was rated significantly lower (p < 0.05) in FB (67 [12]) than in NFB (87
[22]) condition.
Figure 4. Summary results for average block completion time (BCT) and average number of
unsuccessful reallocations per block (URB). Boxplots depict the median (line), interquartile range
(box), maximal/minimal values (whiskers) and outliers (crosses). The figures (a) and (b) depict the
results (BCT [a], URB [b]) per experimental block (data pooled across tasks). The figures (c) and (d)
depict the results (BCT [c], URB [d]) per experimental task (data pooled across blocks). A star
denotes the statistically significant differences (*, p < 0.05). Notations: FB – GLIMPSE feedback
group; No FB – no GLIMPSE feedback group.
Figure 5. Subjective ratings of different feedback sources. Boxplots depict the median (line),
interquartile range (box), maximal/minimal values (whiskers) and outliers (crosses). Feedback
modalities comprise intrinsic sources, available in both the FB and NFB groups, and extrinsic sources,
including the feedback variables provided by GLIMPSE. The extrinsic modalities were present
only in the FB group. A star denotes statistically significant differences (*, p < 0.05).
4. Discussion
Both users and researchers agree that the development and implementation of feedback is of
relevance for improved embodiment and control of myoelectric prostheses [7], [49]. Nevertheless,
after decades of research (the first feedback system was developed in the early 50s [50]), the
commercial implementation is lacking. Inconclusive and often contradictory research results about
overall feedback function and relevance, as well as practical implementation constraints (e.g.,
required hardware redesign), contributed to this lack of translation. In order to tackle these issues, we
presented a novel, practical solution that utilizes smart technology. Specifically, we have developed
GLIMPSE, a system that runs on Google Glass and utilizes a multi-modal interface (display, sound) in
order to communicate, in real time, an abundance of feedback variables (EMG, force, aperture,
contact) from the hand prosthesis back to the user. Importantly, this implementation is a self-contained solution which does not require any additional hardware or software components, making it
a unique and ready-to-deploy feedback system. We have evaluated the novel interface in a clinically
relevant experimental setup based on a modified version of the clothespin reallocation task. The study
addressed objective (time, failure rate) as well as subjective (intrinsic vs. extrinsic feedback
contributions) factors in two scenarios: with GLIMPSE (FB) and without GLIMPSE (NFB) feedback.
Therefore, the study describes a novel technical solution, but also demonstrates how the novel system
can be used to explore the role of feedback and its relevance. The study provides an important insight
regarding which of the intrinsic and extrinsic feedback variables contributed to the task execution
performance. Additionally, it considers the effects of training and task learning.
4.1. The role and benefit of feedback
The total number of reallocations in FB was lower than in NFB, which means that there were fewer
unsuccessful trials in the former condition. This illustrates that the subjects understood and
successfully utilized the GLIMPSE system. It also provides a first indication that the supplemental
feedback was beneficial for the task execution.
Observed across experimental blocks (Figure 4a, b), the subjects exhibited learning in both the FB
and NFB conditions. Interestingly, in both conditions they became faster in performing the task (BCT
decreased) but they did not improve the success rate (URB remained similar). Moreover, there were
no substantial differences in the learning curve between the conditions, and therefore the
supplemental visual feedback did not affect the rate of learning. It is likely that the subjects became
faster in operating the prosthesis mostly during the phases of the task that were less critical (e.g., the
transport of the pin from the lower to the upper bar and the opening of the hand) and that did not require the
utilization of feedback.
There was no significant difference in performance between FB and NFB for the same-color
reallocation task (Figure 4c, d first four boxplot pairs). This result can be attributed to the fact that the
clothespins are compliant objects. The subjects were therefore able to exploit an abundance of
incidental cues from the prosthesis and the pin itself (namely visual, see Figure 5). Moreover, the task
was highly stereotyped and repetitive, consisting of 24 reallocations of the same pin. Thereby, the
subjects could quickly learn the control strategy through trial and error, and this could be
accomplished by relying on the incidental feedback sources. Through repeated grasping, the subjects
could determine the level of muscle contraction that would lead the prosthesis to grasp the pin with
the desired force. They would then consistently activate the muscle to that level, resulting in a good
grasp. In essence, they adjusted (recalibrated) the feedforward control specifically to the ongoing task.
This is similar to the tuning of feedforward commands demonstrated in [32], but in the present study
the learning was driven by the rich incidental feedback (deformable pin). This is an important
outcome, demonstrating that during simple, repetitive tasks the supplemental feedback, however
advanced (as in GLIMPSE), might play a minor role, especially when (intrinsic) feedback sources
are already available.
However, once the task became more demanding (mixed-color reallocation, Figure 4c, d) the
subjects could not rely solely on the learning and feedforward control, as they had to adjust their
grasping strategy upon each new clothespin reallocation. In this scenario, GLIMPSE proved to
be of great value, significantly improving the overall performance. This is an important outcome
demonstrating that supplemental feedback can be beneficial even in the presence of abundant intrinsic
information (unobstructed vision and sound), given that the task is more complex. Note that many
studies investigating closed-loop control in prosthetics block the intrinsic feedback [13], [16], [36],
[51]. In the present experiment, the supplemental feedback provided more information than what was
available from the incidental sources, and the additional information was in this case useful for the
task execution. Importantly, the performance improvement came with no repercussions regarding the
task execution speed, which remained the same for both subject groups (BCT; Figure 4c). This means
that the supplemental feedback was not cognitively taxing, likely due to the high bandwidth of the
visual sense as well as to the manner in which the subjects used the GLIMPSE (see below).
In the NFB group, the subjects could rely only on the incidental feedback sources (Figure 5). By
observing the prosthesis, the subjects could estimate the prosthesis state, aperture and force. For
example, an increase in grasping force deforms the compliant clothespin, so the prosthesis closes
further and the aperture visibly decreases. This is also accompanied by the motor sound generated
while the hand responds to a user command. Moreover, the velocity of closing/opening as well as the produced
grasping force are proportional to the subject contraction strength. Therefore, he/she can exploit the
natural proprioceptive feedback from the muscles (the sense of contraction) when controlling the
prosthesis. In the NFB condition, the subjects relied more on vision and less on audio cues. Once the
hand was closed, the gears in the prosthesis generated a distinct sound each time the force was adjusted. These audio
cues could be used as a crude indication of force increase or decrease. However, the subjects could
also perceive the force information by visually observing the prosthesis, as an increase in force was
indicated by the movement of the hand (squeezing a compliant object). These visual cues were likely
more evident and simpler to exploit compared to the sound. Therefore, in the NFB condition the sound
was assigned a lower importance and somewhat discarded as redundant in favor of vision. It
seems that the subjects also considered muscle proprioception, but the results are not conclusive.
Reliance on vision and muscle proprioception allows learning and a transition to an increasing use
of feedforward control during a simple task, as explained above.
An interesting outcome is that the provision of supplemental feedback through GLIMPSE
decreased the importance of intrinsic visual feedback sources. The reason could be that, in the
presence of more precise visual information available through the AR display (explicit force level),
the subjects naturally opted to rely less (i.e., decrease the importance) on the intrinsic visual cues
(amount of squeezing). Furthermore, assessing the state of the prosthesis might have been somewhat
challenging while interacting with the pins. For example, it could have been difficult to correctly
assess the aperture of the prosthesis due to prosthesis orientation (horizontal palm), contact with the
object and the viewpoint of the subject (behind and from above). In contrast, the artificial visual
feedback provided by GLIMPSE was clearly represented and thereby easily accessible, and this could
explain the decreased subject rating regarding the role of the intrinsically available visual cues.
Contrary to our initial hypothesis, the subjects discounted the EMG feedback, while the other
supplemental variables were used, in particular the grasping force. The weighting of the individual
feedback variables was subject specific, as the ratings are dispersed in Figure 5. For example, some
subjects hardly relied on the aperture and contact events at all (score
systematically addressed and evaluated the contributions and interactions of different feedback factors
in this context. The study was performed in able-bodied subjects utilizing a real prosthetic hand
attached to their lower arm via a custom-made socket. The employed myoelectric control was simple,
intuitive and very easy to master, as the overall results in both test conditions confirm. Therefore, in
the context of the present study and based on our previous experience [45], [52], [53] we would not
expect substantially different results for the amputee population, especially in the case of naïve users.
Experienced users might exhibit more consistent and reliable force control by exploiting the
anticipatory models acquired through a long-term use of the prosthesis [54], especially if they operate
their own prosthesis. For the same reason, they might be better in decoding the prosthesis state from
the incidental feedback sources. Finally, if amputees (either naïve or experienced) were to use
GLIMPSE longitudinally (e.g., for several weeks), differences between them and able-bodied subjects
could emerge due to subjective factors such as overall motivation and determination in utilizing the
system.
As stated in the Introduction, visual feedback for prosthesis control was tested in one study [43],
where it was implemented by placing a bicolor LED on the prosthesis thumb. The LED was used to
communicate the grip force, using a green (lower force range) and red (higher force range) light with
an intensity modulated proportionally to the measured grasping force. The approach rendered
functional benefits to the user, improving the performance in a virtual egg task. Another study [44]
used AR feedback in 3D to communicate prosthesis preshape, aperture and the states of the semi-
autonomous controller (e.g., selected grasp type). However, the focus of the study was on testing the
proof-of-concept for a novel prosthesis control paradigm. The AR feedback was considered as a
component within this control scheme, and it was therefore of secondary importance. The system
described in the present study is a truly wearable, self-contained solution that requires no hardware
modifications to the prosthesis (e.g., integrating an LED). The GLIMPSE is multi-modal, flexible,
easy to use, and the experiments demonstrated the functional benefits in a clinically relevant setting.
Compared to conventional feedback interfaces based on tactile devices, the advantage of
GLIMPSE is its high fidelity and information throughput. It would be extremely difficult, if not
impossible, to communicate such an abundance of information, as in the present study, using an electro-
or vibro-tactile implementation. As demonstrated, this was rather straightforward to implement, and
easy to perceive and process by the subjects, when using an advanced technology such as an AR
display. Moreover, the insights from this study challenge our hypothesis that, in order to
be effective, visual feedback interfaces need to be continuously attended. As stated in the
introduction and methods sections, we took great care to place the AR display in the center of the
subjects’ field of view. Nevertheless, the study demonstrated that this assumption was not correct. As
previously explained, the subjects disregarded the continuous (EMG) feedback not because it was not
useful, but because of the way they used the AR display. Since they only glimpsed at it from time
to time, it seems that the exact positioning of the AR display was not that relevant. Therefore,
mounting the Glasses in the peripheral vision, in order to make the feedback less intrusive, would
likely have no substantial repercussions for the overall performance. Moreover, the objective
performance measurements (similar BCT in FB and NFB) suggest that the amount of visual
attention that the subjects invested to properly utilize the AR feedback was minimal and had no
effect on the overall task execution speed.
A secondary but nevertheless important message of the present study is that
we propose a paradigm shift in developing feedback interfaces in prosthetics. Namely, instead of
developing custom-made interfaces that are embedded in the socket [7] or worn by the user [55], we
propose to utilize smart-devices (e.g., smart-phones, smart-glasses, smart-wearables) to transmit the
feedback. These components are flexible, general-purpose programmable hardware platforms that
can convey the information from the prosthesis via a range of available embedded modalities (i.e.,
integrated speaker, vibration motor, and display). Instead of developing a dedicated hardware
solution, closing the loop through a smart device requires the development of simple, practical and
innovative software solutions that can be uploaded to the smart components. Since smart gadgets are
used widely, this would allow the feedback to become available to virtually every prosthetic user at no
additional cost. Such solutions become especially relevant considering that modern prostheses, such
as the i-Limb and Michelangelo Hand, integrate general-purpose communication interfaces (e.g.,
Bluetooth), through which they can connect to the smart components. In the present study, we
implemented an interface for the Google Glass, a pilot smart device which is presently not
commercially available. However, there are many alternative models [56] and the market for these
systems is yet to be developed. The presented smart app is based on visual feedback and therefore
cannot be directly translated to other platforms (unless embedding a phone or a smart watch in the
prosthetic socket as shown in [57]). Nevertheless, the present study illustrates the general idea of how
the versatile processing and communication resources of a smart device could be used to develop a
novel feedback solution. However, it should be considered that such a feedback solution requires the
user to wear an additional device to receive the feedback. In some cases, this is less of a problem (a
mobile phone is carried most of the time) but in others it could be a challenge for the applicability of
the device (e.g., some users might not like to wear smart glasses). This is not an issue for a custom-
made interface integrated into the prosthesis.
4.3. Future development
The GLIMPSE is a flexible solution, as different feedback layouts can be developed and switched
online, as demonstrated in the additional material. The present study focuses on one of the layouts.
Nevertheless, daily life tasks impose different requirements, from dexterous manipulation to fine and
delicate grasping, and these could all be supported by dedicated feedback configurations integrated in
GLIMPSE. This aspect will be evaluated in future studies. The layouts could be changed between
different tasks or even across the phases of the same task. For example, to pick up an egg one could
choose a feedback screen showing only the force with a high resolution and then, once the grasp is
formed, switch to a screen showing the orientation of the hand to support manipulation.
On a more general level, the GLIMPSE could operate as a hub for controlling several smart
components simultaneously or independently, thereby establishing a prosthetic body area network. In
this way, the GLIMPSE concept would evolve into a powerful and modular feedback system that
could accommodate virtually any application scenario. For example, a smartwatch or a smartphone
can be used in order to communicate various discrete events (e.g., object touched, object slipped,
function-switch, etc.) or continuous variables (e.g., force, aperture, EMG, wrist rotation, etc.) via
an integrated vibration motor or speaker. In this scenario, it is specifically interesting that general-purpose
smart devices could take over functions (e.g., vibro-tactile stimulation) that until now
were usually reserved for specifically designed tactile feedback interfaces. In this scheme, the
GLIMPSE could also provide a complete closed-loop control solution. With several processing cores
available (smart phone, smart glass), the system could integrate pattern recognition/regression
training, adaptation as well as the feedback. Additionally, due to the flexibility of its software and
hardware platform, the GLIMPSE could be used as a general and flexible instrument to further
investigate the properties of sensory feedback, as we have initially demonstrated in this study.
Appendix I: Application implementation
The Glassware Apps are developed and deployed using the Android 4.4.2 SDK (API 19). They
rely on activities, which are software components providing user interface (UI) cards. The user
interacts with the UI cards to perform a desired action, e.g., dial the phone, take a photo, send an
email, or view a map. The UI cards are divided into three categories: Static, Live and Immersion.
Static cards display static information, Live cards display real-time data, and Immersions provide an
interactive experience.
The GLIMPSE interface is designed to be simple and intuitive to use. The user interacts with the
App through the touchpad located on the right-hand side of the Glass by using three simple finger gestures:
“tap/click” to select an option or activate a menu, “swipe left/right” to scroll through the menu lists,
and “swipe down” to cancel the ongoing operation or go back. Upon starting the App, the user is
presented with the Main Menu (Figure 1a), which offers three options: 1) Search for devices, which
initializes BT device scanning and displays all available prosthetic devices; 2) Toggle data logging,
which becomes available once the connection between the prosthesis and the Glass has been
established; and 3) Quit the App. After the list of available BT devices has been populated, the user
can use the “swipe left/right” gesture to scroll through the list and highlight the desired
prosthesis (Figure 1b). By tapping on the selected device, the connection between the
Glass and the prosthesis is established. The two devices then exchange configuration data packets and the
Michelangelo prosthesis starts streaming the sensor data.
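The gesture-driven navigation described above can be pictured as a small state machine (an illustrative sketch; the actual App implements this with Android gesture listeners, and the state names here are hypothetical):

```python
def handle_gesture(state, gesture):
    """Return the next UI state for a touchpad gesture."""
    if gesture == "swipe_down":                    # cancel / go back
        return "main_menu"
    if state == "main_menu" and gesture == "tap":  # Search for devices
        return "device_list"                       # starts BT scanning
    if state == "device_list" and gesture == "tap":
        return "connected"                         # connect to selection
    if gesture in ("swipe_left", "swipe_right"):   # scroll within a list
        return state
    return state
```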
The GLIMPSE consists of two activities (Figure 6) that utilize Live Cards: 1) Main activity and 2)
BtDiscovery activity. The application starts by calling the Main activity that is responsible for
rendering and managing the UI components but also for handling the data transmission, decoding and
logging. The BtDiscovery is launched from the Main activity in order to inquire for the BT devices
that are within the operating range. It performs automatic MAC address matching in order to
filter out the BT devices that do not match the typical Michelangelo prosthesis signature. Each time a
new device is detected, the BtDiscovery notifies the Main activity in order to refresh the UI. Once the
user selects the desired device, the Main activity starts the Bluetooth Socket Programming (BSP)
interface that is responsible for the Bluetooth connection. Through the BSP, the prosthesis and the
Glass exchange proprietary information, such as the prosthesis firmware version. Based on this
information, the GLIMPSE configures the prosthesis controller to start sending the data packets at the
rate of 100 Hz within a separate Communication thread. The data packets are received and stored in a
temporary byte array. In order to ensure data integrity, a cyclic redundancy check (CRC) is
performed on each incoming data packet. Once the data packets pass the integrity check, they are
processed and decoded into messages. Each message contains an array of normalized sensory
feedback values (in percentages): 2-channel EMG activity, current function (palmar grasp, lateral
grasp, and rotation), grip force, hand aperture, hand rotation, and current hand preshape. In addition to
this, two additional pieces of information are extracted and stored in each message: 1) function-switch
event, triggered each time the active prosthesis function is changed, and 2) contact event, triggered
each time the grip force rises above 3% for 30 ms. The messages are then stored in a circular FIFO
buffer. The FIFO message list is updated at a rate of 25 Hz, implying that each update batches
approximately 4 messages (since the prosthesis controller is sending messages at 100 Hz).
Each time the circular FIFO buffer receives new messages, it broadcasts the “New messages
available” event. This event notifies the attached listeners, i.e., the Data logging and Feedback
rendering threads that they should handle the incoming data. The Feedback rendering thread extracts
the last reconstructed message and renders it on the UI Live card. The Data logging thread sleeps
by default and can be toggled by the user at any time during App execution. Each time
data logging is toggled on, it creates a uniquely named .txt file within the internal Glass storage and
continuously records all newly created messages from the circular FIFO buffer.
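The packet-handling chain (integrity check, decoding, event extraction, circular buffering) can be sketched as follows. This is an illustrative Python reconstruction: the actual implementation runs in Java threads on the Glass, and the CRC polynomial and message fields of the proprietary Michelangelo protocol are assumptions.

```python
from collections import deque

PACKET_PERIOD_MS = 10    # controller streams at 100 Hz
CONTACT_THRESHOLD = 3.0  # grip force (%) above which contact may begin
CONTACT_HOLD_MS = 30     # force must stay above threshold this long

def crc8(data, poly=0x07):
    """CRC-8 over a byte sequence (illustrative polynomial; the CRC
    used by the Michelangelo protocol is proprietary)."""
    crc = 0
    for byte in data:
        crc ^= byte
        for _ in range(8):
            crc = ((crc << 1) ^ poly) & 0xFF if crc & 0x80 else (crc << 1) & 0xFF
    return crc

class MessageBuffer:
    """Circular FIFO of decoded messages with contact-event detection."""

    def __init__(self, capacity=100):
        self.fifo = deque(maxlen=capacity)  # oldest messages drop out first
        self._above_ms = 0                  # how long the force stayed high

    def push(self, force, aperture, emg):
        """Store one decoded 100 Hz sample as a message."""
        self._above_ms = (self._above_ms + PACKET_PERIOD_MS
                          if force > CONTACT_THRESHOLD else 0)
        self.fifo.append({
            "force": force, "aperture": aperture, "emg": emg,
            # contact event: force above 3% for at least 30 ms
            "contact": self._above_ms >= CONTACT_HOLD_MS,
        })
```

At the 25 Hz update rate, each rendering pass would then consume roughly four such messages and display the most recent one.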
Figure 6. GLIMPSE implementation. The application has two activities: Main and BtDiscovery. The
Main activity handles all critical system operation via three threads: Communication, Logging and
Feedback. The BtDiscovery activity is invoked by the user and scans for available BT devices with a
matching MAC address.
Appendix II: The feedback questionnaire
Please rate how much, overall, each of the following feedback sources (factors) helped you during
the task execution (0 = Not at All, 100 = A Lot).
1) EMG, rendered on the Google Glass: Not at All 0 25 50 75 100 A Lot
2) Force, rendered on the Google Glass: Not at All 0 25 50 75 100 A Lot
3) Aperture, rendered on the Google Glass: Not at All 0 25 50 75 100 A Lot
4) Contact event, rendered on the Google Glass: Not at All 0 25 50 75 100 A Lot
5) Visual cues from the clothespin/prosthesis: Not at All 0 25 50 75 100 A Lot
6) Audio or vibration cues from the prosthesis/socket: Not at All 0 25 50 75 100 A Lot
7) The proprioceptive feedback (i.e., your own sense of effort, coming from the muscles, skin): Not at All 0 25 50 75 100 A Lot
References
[1] B. Peerdeman, D. Boere, H. Witteveen, R. Huis in ’t Veld, H. Hermens, S. Stramigioli, H.
Rietman, P. Veltink, and S. Misra, “Myoelectric forearm prostheses: State of the art from a
user-centered perspective,” J. Rehabil. Res. Dev., vol. 48, no. 6, pp. 719–737, 2011.
[2] C. Pylatiuk, S. Schulz, and L. Döderlein, “Results of an Internet survey of myoelectric
prosthetic hand users,” Prosthet. Orthot. Int., vol. 31, no. 4, pp. 362–370, Jan. 2007.
[3] E. Biddiss, D. Beaton, and T. Chau, “Consumer design priorities for upper limb prosthetics.,”
Disabil. Rehabil. Assist. Technol., vol. 2, no. 6, pp. 346–57, Nov. 2007.
[4] R. S. Johansson and K. J. Cole, “Sensory-motor coordination during grasping and
manipulative actions.,” Curr. Opin. Neurobiol., vol. 2, no. 6, pp. 815–823, 1992.
[5] T. Callier, H. P. Saal, E. C. Davis-Berg, and S. J. Bensmaia, “Kinematics of unconstrained
tactile texture exploration.,” J. Neurophysiol., vol. 113, no. 7, pp. 3013–20, Apr. 2015.
[6] “Vincent Systems GmbH, VINCENTevolution 2.” [Online]. Available:
http://vincentsystems.de/en/prosthetics/vincent-evolution-2/. [Accessed: 20-May-2015].
[7] C. Antfolk, M. D’Alonzo, B. Rosén, G. Lundborg, F. Sebelius, and C. Cipriani, “Sensory
feedback in upper limb prosthetics.,” Expert Rev. Med. Devices, vol. 10, no. 1, pp. 45–54,
2013.
[8] A. Y. Szeto and F. A. Saunders, “Electrocutaneous stimulation for sensory communication in
rehabilitation engineering.,” IEEE Trans. Biomed. Eng., vol. 29, no. 4, pp. 300–8, Apr. 1982.
[9] S. Dosen, M. Markovic, M. Strbac, M. Perovic, V. Kojic, G. Bijelic, T. Keller, and D. Farina,
“Multichannel Electrotactile Feedback with Spatial and Mixed Coding for Closed-Loop
Control of Grasping Force in Hand Prostheses,” IEEE Trans. Neural Syst. Rehabil. Eng., pp.
1–1, 2016.
[10] A. Chatterjee, P. Chaubey, J. Martin, and N. Thakor, “Testing a Prosthetic Haptic Feedback
Simulator With an Interactive Force Matching Task,” JPO J. Prosthetics Orthot., vol. 20, no.
2, pp. 27–34, 2008.
[11] A. Ninu, S. Dosen, S. Muceli, F. Rattay, H. Dietl, and D. Farina, “Closed-loop control of
grasping with a myoelectric hand prosthesis: Which are the relevant feedback variables for
force control?,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 22, no. 5, pp. 1041–1052, Sep.
2014.
[12] N. Jorgovanovic, S. Dosen, D. J. Djozic, G. Krajoski, and D. Farina, “Virtual grasping:
Closed-loop force control using electrotactile feedback,” Comput. Math. Methods Med., vol.
2014, 2014.
[13] H. J. B. Witteveen, J. S. Rietman, and P. H. Veltink, “Grasping force and slip feedback
through vibrotactile stimulation to be used in myoelectric forearm prostheses,” in Proceedings
of the Annual International Conference of the IEEE Engineering in Medicine and Biology
Society, EMBS, 2012, pp. 2969–2972.
[14] R. Mann and S. Reimers, “Kinesthetic Sensing for the EMG Controlled "Boston
Arm",” IEEE Trans. Man Mach. Syst., vol. 11, no. 1, pp. 110–115, Mar. 1970.
[15] A. Erwin and F. C. Sup, “A Haptic Feedback Scheme to Accurately Position a Virtual Wrist
Prosthesis Using a Three-Node Tactor Array.,” PLoS One, vol. 10, no. 8, 2015.
[16] H. J. B. Witteveen, H. S. Rietman, and P. H. Veltink, “Vibrotactile grasping force and hand
aperture feedback for myoelectric forearm prosthesis users.,” Prosthet. Orthot. Int., vol. 39,
no. 3, pp. 204–12, Jun. 2015.
[17] A. Blank, A. M. Okamura, and K. J. Kuchenbecker, “Identifying the role of proprioception in
upper-limb prosthesis control,” ACM Trans. Appl. Percept., vol. 7, no. 3, pp. 1–23, Jun. 2010.
[18] K. Kim, J. E. Colgate, J. J. Santos-Munne, A. Makhlin, and M. A. Peshkin, “On the
Design of Miniature Haptic Devices for Upper Extremity Prosthetics,” IEEE/ASME Trans.
Mechatronics, vol. 15, no. 1, pp. 27–39, Feb. 2010.
[19] R. S. Armiger, F. V. Tenore, K. D. Katyal, M. S. Johannes, A. Makhlin, M. L. Natter, J. E.
Colgate, S. J. Bensmaia, and R. J. Vogelstein, “Enabling closed-loop control of the Modular
Prosthetic Limb through haptic feedback,” Johns Hopkins APL Tech. Dig., vol. 31, no. 4, pp.
345–353, 2013.
[20] S. Dosen, A. Ninu, T. Yakimovich, H. Dietl, and D. Farina, “A Novel Method to Generate
Amplitude-Frequency Modulated Vibrotactile Stimulation,” IEEE Trans. Haptics, vol. 9, no.
1, pp. 3–12, Jan. 2016.
[21] S. Raspopovic, M. Capogrosso, F. M. Petrini, M. Bonizzato, J. Rigosa, G. Di Pino, J.
Carpaneto, M. Controzzi, T. Boretius, E. Fernandez, G. Granata, C. M. Oddo, L. Citi, A. L.
Ciancio, C. Cipriani, M. C. Carrozza, W. Jensen, E. Guglielmelli, T. Stieglitz, P. M. Rossini,
and S. Micera, “Restoring natural sensory feedback in real-time bidirectional hand
prostheses.,” Sci. Transl. Med., vol. 6, no. 222, p. 222ra19, Feb. 2014.
[22] T. S. Davis, H. A. C. Wark, D. T. Hutchinson, D. J. Warren, K. O’Neill, T. Scheinblum, G. A.
Clark, R. A. Normann, and B. Greger, “Restoring motor control and sensory feedback in
people with upper extremity amputations using arrays of 96 microelectrodes implanted in the
median and ulnar nerves.,” J. Neural Eng., vol. 13, no. 3, p. 36001, Jun. 2016.
[23] G. S. Dhillon and K. W. Horch, “Direct neural sensory feedback and control of a prosthetic
arm.,” IEEE Trans. Neural Syst. Rehabil. Eng., vol. 13, no. 4, pp. 468–72, Dec. 2005.
[24] B. M. London, L. R. Jordan, C. R. Jackson, and L. E. Miller, “Electrical Stimulation of the
Proprioceptive Cortex (Area 3a) Used to Instruct a Behaving Monkey,” IEEE Trans. Neural
Syst. Rehabil. Eng., vol. 16, no. 1, pp. 32–36, Feb. 2008.
[25] J. Cronin, J. Wu, K. Collins, D. Sarma, R. Rao, J. Ojemann, and J. Olson, “Task-Specific
Somatosensory Feedback via Cortical Stimulation in Humans,” IEEE Trans. Haptics, pp. 1–1,
2016.
[26] I. Darian-Smith, “The Sense of Touch: Performance and Peripheral Neural Processes,” in
Handbook of Physiology, I. Darian-Smith, Ed. Bethesda, MD: American Physiological
Society, 1984, pp. 739–788.
[27] A. B. Vallbo and R. S. Johansson, “Properties of cutaneous mechanoreceptors in the human
hand related to touch sensation.,” Hum. Neurobiol., vol. 3, no. 1, pp. 3–14, 1984.
[28] S. Lemling, “Somatosensory Feedback in Prosthetics: Psychometry and Closed-loop Control
using Vibro- and Electro-tactile Stimulation,” Universität Bielefeld, 2015.
[29] L. P. Paredes, S. Dosen, F. Rattay, B. Graimann, and D. Farina, “The impact of the stimulation
frequency on closed-loop control with electrotactile feedback,” J. Neuroeng. Rehabil., vol. 12,
no. 1, p. 35, Dec. 2015.
[30] R. W. Mann, “Kinesthetic Sensing for the EMG Controlled ‘Boston Arm,’” IEEE Trans.
Man Mach. Syst., vol. 11, no. 1, pp. 110–115, Mar. 1970.
[31] R. E. Prior, J. Lyman, P. A. Case, and C. M. Scott, “Supplemental sensory feedback for the
VA/NU myoelectric hand. Background and preliminary designs.,” Bull. Prosthet. Res., vol.
101, no. 134, pp. 170–191, 1976.
[32] S. Dosen, M. Markovic, N. Wille, M. Henkel, M. Koppe, A. Ninu, C. Frömmel, and D.
Farina, “Building an internal model of a myoelectric prosthesis via closed-loop control for
consistent and routine grasping,” Exp. Brain Res., vol. 233, no. 6, pp. 1855–1865, Jun. 2015.
[33] H. Xu, D. Zhang, J. Huegel, W. Xu, and X. Zhu, “Effects of Different Tactile Feedback on
Myoelectric Closed-loop Control for Grasping based on Electrotactile Stimulation,” IEEE
Trans. Neural Syst. Rehabil. Eng., vol. PP, no. 99, pp. 1–1, 2015.
[34] J. Walker, A. Blank, P. Shewokis, and M. O’Malley, “Tactile Feedback of Object Slip
Facilitates Virtual Object Manipulation,” IEEE Trans. Haptics, pp. 1–1, Oct. 2015.
[35] C. E. Stepp, Q. An, and Y. Matsuoka, “Repeated training with augmentative vibrotactile
feedback increases object manipulation performance,” PLoS One, vol. 7, no. 2, p. e32743,
2012.
[36] D. D. Damian, A. H. Arita, H. Martinez, and R. Pfeifer, “Slip Speed Feedback for Grip Force
Control,” IEEE Trans. Biomed. Eng., vol. 59, no. 8, pp. 2200–2210, Aug. 2012.
[37] J. Gonzalez, H. Soma, M. Sekine, and W. Yu, “Psycho-physiological assessment of a
prosthetic hand sensory feedback system based on an auditory display: a preliminary study.,”
J. Neuroeng. Rehabil., vol. 9, no. 1, p. 33, Jan. 2012.
[38] J. D. Brown, A. Paek, M. Syed, M. K. O’Malley, P. A. Shewokis, J. L. Contreras-Vidal, A. J.
Davis, and R. B. Gillespie, “An exploration of grip force regulation with a low-impedance
myoelectric prosthesis featuring referred haptic feedback.,” J. Neuroeng. Rehabil., vol. 12, p.
104, 2015.
[39] H. J. B. Witteveen, L. de Rond, J. S. Rietman, and P. H. Veltink, “Hand-opening feedback for
myoelectric forearm prostheses: performance in virtual grasping tasks influenced by different
levels of distraction.,” J. Rehabil. Res. Dev., vol. 49, no. 10, pp. 1517–26, 2012.
[40] F. Clemente, M. D’Alonzo, M. Controzzi, B. Edin, and C. Cipriani, “Non-invasive, temporally
discrete feedback of object contact and release improves grasp control of closed-loop
myoelectric transradial prostheses,” IEEE Trans. Neural Syst. Rehabil. Eng., pp. 1–1, Nov.
2015.
[41] M. Zafar and C. L. Van Doren, “Effectiveness of supplemental grasp-force feedback in the
presence of vision.,” Med. Biol. Eng. Comput., vol. 38, no. 3, pp. 267–274, May 2000.
[42] “Otto Bock HealthCare GmbH, Michelangelo®.” [Online]. Available:
http://www.ottobock.com/cps/rde/xchg/ob_com_en/hs.xsl/49464.html. [Accessed: 09-Feb-
2015].
[43] E. D. Engeberg and S. Meek, “Enhanced visual feedback for slip prevention with a prosthetic
hand.,” Prosthet. Orthot. Int., vol. 36, no. 4, pp. 423–9, Dec. 2012.
[44] M. Markovic, S. Dosen, C. Cipriani, D. Popovic, and D. Farina, “Stereovision and augmented
reality for closed-loop control of grasping in hand prostheses.,” J. Neural Eng., vol. 11, no. 4,
p. 46001, Aug. 2014.
[45] S. Dosen, M. Markovic, K. Somer, B. Graimann, and D. Farina, “EMG Biofeedback for online
predictive control of grasping force in a myoelectric prosthesis.,” J. Neuroeng. Rehabil., vol.
12, no. 1, p. 55, Dec. 2015.
[46] M. A. Schweisfurth, M. Markovic, S. Dosen, F. Teich, B. Graimann, and D. Farina,
“Electrotactile EMG feedback improves the control of prosthesis grasping force,” J. Neural
Eng., vol. 13, no. 5, p. 56010, Oct. 2016.
[47] “Otto Bock Michelangelo Hand,” 2014.
[48] “Google Glass, Google Inc.” [Online]. Available:
https://developers.google.com/glass/distribute/glass-at-work. [Accessed: 25-Jul-2016].
[49] S. Lewis, M. F. Russold, H. Dietl, and E. Kaniusas, “Satisfaction of Prosthesis Users with
Electrical Hand Prostheses and their Suggested Improvements.,” Biomed. Tech. (Berl)., Sep.
2013.
[50] J. Gonzelman, H. Ellis, and O. Clayton, “Prosthetic device sensory attachment,” US2656545
A, 27-Oct-1953.
[51] I. Saunders and S. Vijayakumar, “The role of feed-forward and feedback processes for closed-
loop prosthesis control.,” J. Neuroeng. Rehabil., vol. 8, no. 1, p. 60, Jan. 2011.
[52] M. A. Schweisfurth, M. Markovic, S. Dosen, F. Teich, B. Graimann, and D. Farina,
“Electrotactile EMG feedback improves the control of prosthesis grasping force,” J. Neural
Eng., vol. 13, no. 5, p. 56010, Oct. 2016.
[53] M. Markovic, S. Dosen, D. Popovic, B. Graimann, and D. Farina, “Sensor fusion and
computer vision for context-aware control of a multi degree-of-freedom prosthesis.,” J. Neural
Eng., vol. 12, no. 6, p. 66022, Nov. 2015.
[54] P. S. Lum, I. Black, R. J. Holley, J. Barth, and A. W. Dromerick, “Internal models of upper
limb prosthesis users when grasping and lifting a fragile object with their prosthetic limb.,”
Exp. Brain Res., Aug. 2014.
[55] F. Clemente, S. Dosen, L. Lonini, M. Markovic, D. Farina, and C. Cipriani, “Humans Can
Integrate Augmented Reality Feedback in Their Sensorimotor Control of a Robotic Hand,”
IEEE Trans. Human-Mach. Syst., pp. 1–7, 2016.
[56] “The best smartglasses 2017: Snap, Vuzix, ODG, Sony & more.” [Online]. Available:
https://www.wareable.com/headgear/the-best-smartglasses-google-glass-and-the-rest.
[Accessed: 26-Jan-2017].
[57] “Man embeds smartphone into prosthetic arm - CNET.” [Online]. Available:
https://www.cnet.com/news/man-embeds-smartphone-into-prosthetic-arm/. [Accessed: 26-Jan-
2017].