SCIENCE ROBOTICS | RESEARCH ARTICLE

HUMAN-ROBOT INTERACTION

Noninvasive neuroimaging enhances continuous neural tracking for robotic device control

B. J. Edelman1*, J. Meng2†, D. Suma2†, C. Zurn1, E. Nagarajan3, B. S. Baxter1‡, C. C. Cline1§, B. He2||

1Department of Biomedical Engineering, University of Minnesota, Minneapolis, MN 55455, USA. 2Department of Biomedical Engineering, Carnegie Mellon University, Pittsburgh, PA 15213, USA. 3Department of Neuroscience, University of Minnesota, Minneapolis, MN 55455, USA.
*Present address: Stanford University, Stanford, CA 94304, USA.
†These authors contributed equally to this work.
‡Present address: Massachusetts General Hospital, Charlestown, MA 02114, USA, and Harvard Medical School, Boston, MA 02115, USA.
§Present address: Stanford University, Stanford, CA 94304, USA, and Veterans Affairs Palo Alto Healthcare System, Palo Alto, CA 94394, USA.
||Corresponding author. Email: [email protected]

Copyright © 2019 The Authors, some rights reserved; exclusive licensee American Association for the Advancement of Science. No claim to original U.S. Government Works.

Edelman et al., Sci. Robot. 4, eaaw6844 (2019) 19 June 2019

Brain-computer interfaces (BCIs) using signals acquired with intracortical implants have achieved successful high-dimensional robotic device control useful for completing daily tasks. However, the substantial amount of medical and surgical expertise required to correctly implant and operate these systems greatly limits their use beyond a few clinical cases. A noninvasive counterpart requiring less intervention that can provide high-quality control would profoundly improve the integration of BCIs into the clinical and home setting. Here, we present and validate a noninvasive framework using electroencephalography (EEG) to achieve the neural control of a robotic device for continuous random target tracking. This framework addresses and improves upon both the "brain" and "computer" components by increasing, respectively, user engagement through a continuous pursuit task and associated training paradigm and the spatial resolution of noninvasive neural data through EEG source imaging. In all, our unique framework enhanced BCI learning by nearly 60% for traditional center-out tasks and by more than 500% in the more realistic continuous pursuit task. We further demonstrated an additional enhancement in BCI control of almost 10% by using online noninvasive neuroimaging. Last, this framework was deployed in a physical task, demonstrating a near-seamless transition from the control of an unconstrained virtual cursor to the real-time control of a robotic arm. Such combined advances in the quality of neural decoding and the practical utility of noninvasive robotic arm control will have major implications for the eventual development and implementation of neurorobotics by means of noninvasive BCI.


INTRODUCTION

Detecting mental intent and controlling external devices through brain-computer interface (BCI) technology has opened doors to improving the lives of patients suffering from various neurological disorders, including amyotrophic lateral sclerosis and spinal cord injury (1-5). These realizations have enabled patients to communicate with attending clinicians and researchers in the laboratory by simply imagining actions of different body parts (6, 7). Although achievable task complexity varies between invasive and noninvasive systems, BCIs in both domains have restored once-lost bodily functions that include independent ambulation (8), functional manipulations of the hands (3, 4), and linguistic communication (9, 10). Hence, clinical interest is rapidly building for systems that allow patients to interact with their environment through autonomous neural control (2, 8, 11). Nevertheless, although technology targeting the restoration or augmentation of arm and hand control is of the highest priority in the intended patient populations, electroencephalography (EEG)-based BCIs targeting such restorative interventions are some of the least effective (12, 13). With exemplary clinical applications focusing on robotic- or orthosis-assisted hand control (4), it is paramount to improve coordinated navigation of a robotic arm, because precise positioning will be vital for the success of downstream actions (14). To meet this need, we present here a unified noninvasive framework for the continuous EEG-based two-dimensional (2D) control of a physical robotic arm.

BCI learning rates can vary among individuals, but it is generally thought that a user's motivation and cognitive arousal play significant roles in the process of skill acquisition and eventual task performance (15, 16). Although levels of internal motivation vary across populations and time (17), engaging users and maintaining attention via stimulating task paradigms may diminish these differences. Current BCI task paradigms overwhelmingly involve simple cued center-out tasks defined by discrete trials (DTs) of neural control (18). These tasks provide robust test beds for new decoding algorithms but do not account for the random perturbations that invariably occur in daily life. Continuous analogs, in which users are not bound by time-limited objectives, enable control strategies that facilitate the extension of BCI toward the realistic control of physical devices in the home and clinic (8). Here, to produce robust robotic arm control that would be useful for daily life, we developed a continuous pursuit (CP) task, in which users performed motor imagination (MI) to chase a randomly moving target (movies S1 to S3) (18, 19). We found that CP task training produced stronger behavioral and physiological learning effects than traditional DT task training, an effect that can be credited to the Yerkes-Dodson law (20).

Poor signal quality can further complicate the ability to decode neural events, especially when using noninvasive signals such as EEG (21). Spatial filtering has long been used to denoise noninvasive BCI signals (22, 23) and has recently offered promise in detecting increasingly diverse realistic commands (10, 24, 25). Electrical source imaging (ESI) is one such approach that uses the electrical properties and geometry of the head to mitigate the effects of volume conduction and estimate cortical activity (26). Dramatic improvements in offline neural decoding have been observed when using ESI compared with traditional sensor techniques (24, 27); however, these approaches have yet to be validated online. By developing a real-time ESI platform, we were able to isolate and evaluate neural decoding in both the sensor and the source domains without introducing the confounding online processing steps that often accompany other spatial filtering techniques (different classifiers, time windows, etc.).

In all, the framework presented here demonstrates a systematic approach to achieving continuous robotic arm control through the targeted improvement of both the user learning ("brain" component) and the machine learning ("computer" component) elements of a BCI. Specifically, using a CP task training paradigm increased BCI learning by nearly 60% for traditional DT tasks and by more than 500% in the more realistic CP task. The utility of real-time ESI further introduced a significant 10% improvement in CP BCI control for users experienced in classical sensor-based BCI. Through the integration of these improvements, we demonstrated the continuous control of a robotic arm (movies S4 to S7) at almost identical levels to that of virtual cursor control, highlighting the potential of noninvasive BCI to translate to real-world devices for practical tasks and eventual clinical applications.


RESULTS

Before addressing whether our online ESI-based decoding strategy could be used for the continuous control of a robotic arm, the CP task and source signal approaches needed to be thoroughly validated as useful training and control strategies, respectively (Fig. 1). Thirty-three individuals naïve to BCI participated in a virtual cursor BCI learning phase. The training length was set at 10 sessions to facilitate practical data acquisition and to establish a threshold for future training applications. These 33 users were split into three groups: sensor domain CP training (CP), sensor domain DT training (DT), and source domain CP training (using real-time ESI; sCP). This design allowed us to answer (i) which training task (CP versus DT) and (ii) which neurofeedback domain (source versus sensor control) led to more effective BCI skill acquisition (see Materials and Methods for details on participant demographics and baseline group metrics). The within-session effects of source versus sensor control (virtual cursor) on CP BCI performance were tested on 29 individuals, 16 with previous BCI experience (sensor control) and 13 naïve to BCI. Furthermore, six individuals with BCI experience (sensor DT cursor control) participated in experiments designed to compare performances between virtual cursor and robotic arm control in a physically constrained variation of the CP task.

Noninvasive continuous virtual target tracking via motor intent

Throughout all experimental sessions, users were instructed to control the trajectory of a virtual cursor using MI tasks; left- and right-hand MI for the corresponding left and right movement, and both hands MI and rest for up and down movement, respectively. These tasks were chosen on the basis of previous cursor control (28) and neurophysiological (19) exploration. Horizontal and vertical cursor movements were controlled independently. CP trials lasted 60 s each and required users to track a randomly moving target within a square workspace (Fig. 2, A and B; fig. S1, A and B; and movies S1 to S3). Previous implementations of similar tasks used technician-controlled (manual) target trajectories, which can introduce inconsistencies and biases during tracking (29). To avoid such scenarios, we governed target trajectories in the current work by a Gaussian random process (see Materials and Methods). Nevertheless, it is possible for such a random process to drive the target toward stagnation at an edge/corner, which could synthetically distort performance. Therefore, to better estimate the difference between DT and CP task training, and contrary to previous work (18, 29), our initial CP task allowed the cursor and target to fluidly wrap from one side of the workspace to the other (top to bottom, left to right, and vice versa) upon crossing an edge (Fig. 2, A and B; fig. S1, A and B; and movies S1 to S3). Trajectories from experienced users were unwrapped (fig. S1C) to reveal squared tracking correlations of r²_hor = 0.48 ± 0.20 and r²_ver = 0.47 ± 0.19 (fig. S2).
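For readers reproducing this analysis, the sketch below shows one way the unwrapped squared tracking correlation could be computed from cursor and target traces normalized to the workspace; the wrap-detection threshold and function names are illustrative assumptions rather than the authors' implementation.

import numpy as np

def unwrap_unit(pos, threshold=0.5):
    """Undo edge wrapping of a 1D trajectory normalized to [0, 1):
    any frame-to-frame jump larger than `threshold` is treated as a wrap."""
    pos = np.asarray(pos, float)
    jumps = np.diff(pos)
    wraps = np.where(np.abs(jumps) > threshold, np.sign(jumps), 0.0)
    out = pos.copy()
    out[1:] -= np.cumsum(wraps)  # shift later samples by one workspace width per wrap
    return out

def squared_tracking_correlation(cursor, target):
    """Squared Pearson correlation (r^2) between unwrapped cursor and target traces."""
    r = np.corrcoef(unwrap_unit(cursor), unwrap_unit(target))[0, 1]
    return r ** 2

Applied per dimension (e.g., squared_tracking_correlation(cursor_x, target_x)), this yields the r²_hor and r²_ver statistics reported above.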

Fig. 1. Source-based CP BCI robotic arm framework. The proposed framework addressed both user and machine learning aspects of BCI technology before being implemented in the control of a realistic robotic device. User learning was addressed by investigating the behavioral and physiological effects of BCI training using sensor-level neurofeedback with a traditional DT center-out task (n = 11) and a more realistic CP task (n = 11) (top left). The effects of BCI training were further tested in the CP task using source-level neurofeedback (n = 11) obtained through online ESI with user-specific anatomical models (center). This design allowed us to determine both the optimal task and neurofeedback domain for BCI skill acquisition. The machine learning aspect was further examined across the skill spectrum by testing the effects of source-level neurofeedback, compared with sensor-level neurofeedback, in naïve (n = 13) and experienced (n = 16) users in a randomized single-blinded design (top right). The user and machine learning components of the proposed framework were then combined to achieve real-time continuous source-based control of a robotic arm (n = 6) (bottom). Comparing BCI performance of robotic arm and virtual cursor control demonstrated the ease of translating neural control of a virtual object to a realistic assistive device useful for clinical applications.

BCI skill acquisition and user engagement

We investigated the utility of using the CP task for BCI skill acquisition in a pre-post study design by comparing BCI performance between populations trained by either CP or DT task. Twenty-two individuals participated in a baseline session, eight training sessions, and an evaluation session. Baseline and evaluation sessions contained both DT and CP tasks (and MI without feedback), whereas training sessions contained only one task type, consistent throughout training according to each user's assigned group (DT or CP, n = 11 per group; see Materials and Methods). All sessions for both groups used scalp sensor information. 1D horizontal DT performance was used to baseline match the two groups (fig. S3A).

Electrodes used for online control were optimized on a session-by-session basis (seeMaterials andMethods), chosen from a set of 57 sen-sors covering the sensorimotor regions. Electrodes were identified forthe horizontal and vertical control dimensions independently usingthe corresponding right-handMI versus left-handMI and both handsMI versus rest datasets. Throughout training, the two groups derivednearly identical feature (electrode) maps in the sensor domain con-taining focal bilateral scalp clusters overlying the cortical hand regions(Fig. 2C). These clusters were located andweighted in accordancewiththe underlying event-related (de)synchronization generated duringthe correspondingMI tasks (19) and are similar to those used in othernoninvasive cursor control studies, identified through either data-driven (30) or manual (28) selection processes.
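As a rough illustration of this kind of data-driven selection, the following sketch ranks sensors by the squared correlation between their alpha-band power and binary MI condition labels; the band-power inputs, label coding, and the number of electrodes kept (n_keep) are assumptions, not the study's exact procedure.

import numpy as np

def r2_per_electrode(power, labels):
    """Squared Pearson correlation between alpha-band power and binary task labels.

    power : array (n_trials, n_electrodes) of band-power values
    labels: array (n_trials,) of 0/1 condition labels (e.g., left vs. right MI)
    """
    power = np.asarray(power, float)
    labels = np.asarray(labels, float)
    p_c = power - power.mean(axis=0)
    l_c = labels - labels.mean()
    num = p_c.T @ l_c
    den = np.sqrt((p_c ** 2).sum(axis=0) * (l_c ** 2).sum())
    return (num / den) ** 2

def select_control_electrodes(power, labels, n_keep=9):
    """Rank electrodes by R2 and keep the top n_keep for online control."""
    scores = r2_per_electrode(power, labels)
    return np.argsort(scores)[::-1][:n_keep]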

DT task performance was measured in terms of percent valid correct (PVC), computed as the number of hit trials divided by the total number of trials in which a final decision was made (valid trials). The corresponding CP task performance metric was mean squared error (MSE), i.e., the average normalized squared error between the target and cursor location over the course of a single run. Across these 22 participants, the results of a repeated-measures two-way analysis of variance (ANOVA) revealed a significant main effect of time for both the CP MSE (F1,20 = 7.39, P < 0.05; Fig. 2D) and DT PVC (F1,20 = 19.80, P < 0.005; Fig. 2E) metrics. To examine skill generalizability, we specifically considered the effects of training on the performance of familiar and unfamiliar tasks. Individuals trained with the CP task significantly improved in the same task after training [Tukey's honestly significant difference (HSD) post hoc test, P < 0.05; Fig. 2D, left bars], whereas those trained with the DT task did not (Tukey's HSD post hoc test, P = 0.14; Fig. 2E, right bars). Previous work has indicated that DT task training can lead to strong learning effects (31); however, some users have required nearly 70 training sessions to do so (18). When considering unfamiliar tasks, the DT training group only modestly improved in the CP task after training (Tukey's HSD post hoc test, P = 0.96; Fig. 2D, right bars), whereas the CP training group displayed a significant improvement in the DT task (Tukey's HSD post hoc test, P < 0.005; Fig. 2E, left bars).

Because the two tasks varied greatly in control dynamics, it was difficult to draw comparisons between these differences. Therefore, in addition to statistical testing, we also examined the effect size (point biserial correlation, see Materials and Methods), a measure, unconfounded by sample size, of the magnitude of the difference within each performance metric between baseline and evaluation sessions.
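The three quantities just described (PVC, CP MSE, and the point biserial effect size) reduce to a few lines of NumPy; the sketch below assumes cursor and target coordinates normalized to the workspace and simple per-group performance vectors.

import numpy as np

def percent_valid_correct(hits, valid_trials):
    """PVC: hit trials divided by trials in which a final decision was made."""
    return 100.0 * hits / valid_trials if valid_trials else np.nan

def tracking_mse(cursor_xy, target_xy):
    """Mean squared cursor-target error over one run (normalized coordinates)."""
    err = np.asarray(cursor_xy, float) - np.asarray(target_xy, float)
    return np.mean(np.sum(err ** 2, axis=1))

def point_biserial_effect_size(baseline, evaluation):
    """|r| between performance values and a baseline/evaluation group indicator."""
    values = np.concatenate([baseline, evaluation])
    group = np.concatenate([np.zeros(len(baseline)), np.ones(len(evaluation))])
    return abs(np.corrcoef(values, group)[0, 1])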

Fig. 2. BCI performance and user engagement. (A) Depiction of the CP edge wrapping feature. (B) Tracking trajectory during an example 2D CP trial. (C) Training feature maps for the DT and CP training groups for horizontal (top) and vertical (bottom) cursor control. r2, squared correlation coefficient. (D and E) 2D BCI performance for the CP (D) and DT (E) task at baseline and evaluation for the CP and DT training groups. The red dotted line indicates chance level. The effect size, |r|, is indicated under each pair of bars. (F) Task learning for the CP (top) and DT (bottom) tasks. (G) Eye blink EEG component scalp topography (top) and activity (bottom left) at baseline and evaluation, and activity during each task (CP versus DT) (bottom right). Bars indicate mean + SEM. Statistical analysis using a one- (F) or two-way repeated-measures (D, E, and G) ANOVA (n = 11 per group) with main effects of task, and time and task, respectively. Main effect of time: #P < 0.05, ###P < 0.005. Tukey's HSD post hoc test: *P < 0.05, ***P < 0.005.


Compared with the DT group, the effect sizes were far larger for the CP group for both tasks (Fig. 2, D and E), displaying a 500% learning improvement in the CP task and a nearly 60% learning improvement in the DT task (Fig. 2F).

To delineate the underlying physiology of these training differences, we investigated user engagement during both tasks by quantifying eye blink activity. Decreased blink activity has been implicated in heightened attentional processes and cognitive arousal during various tasks (32). These mental states can dramatically influence task training and performance; whereas stimulating tasks can facilitate skill acquisition, boring or frustrating tasks can inhibit performance (20). The eye blink component of the EEG was extracted during the baseline and evaluation sessions using independent component analysis (ICA) (Fig. 2G and fig. S4). Across all participants, blink activity was strongly dampened at the baseline (F1,63 = 9.84, P < 0.005; Fig. 2G), suggesting heightened attention that was likely due to the novelty of BCI in general. Increased blink activity at the evaluation supports user skill acquisition, because less attention was required for improved performance. The large reduction in blink activity observed during the CP task, compared with the DT task (F1,63 = 3.51, P = 0.066; Fig. 2G), suggests that the CP task elicited heightened user engagement during active control, a feature that may explain the more dramatic positive training effects.
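The paper does not specify the ICA implementation or the criterion used to pick the blink component; the sketch below uses scikit-learn's FastICA and correlation with a frontal channel as a stand-in blink criterion, so both choices are assumptions.

import numpy as np
from sklearn.decomposition import FastICA

def extract_blink_component(eeg, frontal_idx=0, n_components=20):
    """Unmix EEG (n_times, n_channels) and return the component most correlated
    with a frontal channel, as a crude blink estimate (illustrative criterion)."""
    ica = FastICA(n_components=n_components, random_state=0)
    sources = ica.fit_transform(eeg)                 # (n_times, n_components)
    frontal = eeg[:, frontal_idx]
    corrs = [abs(np.corrcoef(sources[:, k], frontal)[0, 1])
             for k in range(sources.shape[1])]
    blink_idx = int(np.argmax(corrs))
    # Return the component time course and its scalp projection (mixing column).
    return sources[:, blink_idx], ica.mixing_[:, blink_idx]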

Learning to modulate sensorimotor rhythms

Whereas BCI feedback plays a significant role in facilitating sensorimotor rhythm modulation (33), MI without feedback can provide a measure of a user's natural ability to produce the associated discriminative EEG patterns. Left-hand MI versus right-hand MI (left versus right) and both hands MI versus rest (up versus down) runs were analyzed individually. An index of modulation between any two mental states is represented as the regression output (R2) between the EEG alpha power and the task labels (see Materials and Methods). Only the 57 sensorimotor electrodes used for online control were included in this analysis. Although sensorimotor modulation significantly increased for both task pairs from baseline to evaluation (horizontal: F1,20 = 4.70, P < 0.05; vertical: F1,20 = 21.01, P < 0.005; Fig. 3, A and C), the spatial distribution of these improvements is more meaningful in evaluating the effectiveness of BCI training. Except for mild baseline modulation in the DT group, no strong patterns were apparent for either task pair before training. For the horizontal dimension at the evaluation session, the CP group produced highly focal bilateral modulation patterns, whereas more global modulation was observed for the DT group (Fig. 3B). Evaluation topographies were more consistent between the two training groups for the vertical dimension (Fig. 3D). Electrodes displaying a significant improvement in modulation were far more numerous for the CP group than for the DT group for both horizontal (CP, 12; DT, 3; Fig. 3E) and vertical (CP, 37; DT, 13; Fig. 3F) tasks. Furthermore, these significant electrodes clustered far closer to scalp regions covering the approximate hand cortical regions (e.g., C3-4 and CP3-4) in the CP group. These localized changes provide compelling evidence that the enhanced behavioral improvement seen in the CP training group was accompanied by consistent physiological changes in sensorimotor modulation (R2 values) (34).

Source neurofeedback does not further facilitate CP BCI learning

Whereas the CP task allowed us to target user learning and progress toward the robust online control of a robotic arm, we additionally wanted to address the machine learning element. To evaluate whether real-time ESI-based decoding improved performance throughout training, we recruited an additional group of BCI naïve individuals (n = 11) for CP training using source neurofeedback (source control, sCP). This sCP group was baseline-matched to the previous CP (and DT) group (sensor control) (fig. S5A). For source control, we implemented user- and session-specific inverse models into the online decoding pipeline for the CP task. Similar to the CP group, the sCP group significantly improved in both the 2D CP (Tukey's HSD post hoc test, P < 0.05; Fig. 4A, right bars) and the 2D DT tasks (Tukey's HSD post hoc test, P < 0.05; Fig. 4B, right bars) after training. Accordingly, very similar learning effects were observed for both tasks in the CP and sCP groups (Fig. 4C). The final performance and learning rates were consistent between the two training groups (CP and sCP), supporting the groups' shared familiar and unfamiliar task proficiency.

Feature selection in the source domain identified distinct cortical clusters, optimized through anatomical and functional constraints, for online control and was performed on a session-by-session basis (see Materials and Methods). As expected, sCP training feature maps highlighted hand cortical regions for both control dimensions throughout training (Fig. 4D). It should be noted that the baseline and evaluation sessions for the sCP group were completed in the sensor domain to maintain consistent conditions with the other training groups. Although training duration was fixed at eight sessions with no intermediary testing, further investigation at different stages of learning may help pinpoint when source-based decoding may benefit BCI skill acquisition.

EEG source imaging enhances neural control in defined skill states

To thoroughly investigate the effects of source control (real-time ESI) on CP task performance (and potential future benefits for robotic arm control), we performed within-session comparisons of source and sensor virtual cursor control on users in stable skill states. The CP task was chosen for further analysis because it is more applicable to robotic arm control than the DT task and displayed both increased difficulty and skill acquisition. Our investigation included both extremes of the BCI skill spectrum; experienced users (12.8 ± 8.9 hours of previous BCI training, n = 16) participated in up to three sessions, and naïve users (no previous BCI training, n = 13) participated in a single session (to avoid confounding effects of early learning in more than one session). User- and session-specific inverse models were also used for these participants.

For experienced users, source control improved performance over that of conventional sensor control, producing a significant reduction in the 2D MSE (F1,69 = 9.83, P < 0.01; Fig. 5A). Unsurprisingly, the sensor and source MSE values clustered near those of the CP training group after training (evaluation), reinforcing their skilled state. The spatial extent of the observed improvement in the CP task was characterized through squared error histograms (Fig. 5B), with source values shifting toward smaller errors and sensor values shifting toward larger errors. By fitting gamma functions to these histograms, we derived a quantitative threshold, independent of cursor/target size, for statistically testing the spatial extent of the performance difference (fig. S6). Experienced users dwelt within this defined region, a disc with a diameter of 16.67% of the workspace width centered on the target (Fig. 5E), for significantly more time during source control than sensor control (F1,69 = 20.96, P < 0.005; Fig. 5F).
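The actual threshold derivation is detailed in fig. S6; as a loose sketch of the idea, one can fit a gamma distribution to the per-sample squared errors, read a threshold off the fitted curve, and then score dwell time as the fraction of samples inside that radius. The percentile criterion below is an assumption, not the paper's rule.

import numpy as np
from scipy import stats

def error_threshold(squared_errors, pct=50):
    """Fit a gamma distribution to per-sample squared tracking errors and return
    the pct-th percentile of the fit as a spatial threshold (assumed criterion)."""
    a, loc, scale = stats.gamma.fit(squared_errors, floc=0)
    return stats.gamma.ppf(pct / 100.0, a, loc=loc, scale=scale)

def dwell_fraction(cursor_xy, target_xy, radius):
    """Fraction of samples in which the cursor stays within `radius` of the target."""
    d = np.linalg.norm(np.asarray(cursor_xy) - np.asarray(target_xy), axis=1)
    return np.mean(d <= radius)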

Naïve users also demonstrated an overall improvement in online performance with source control, although this improvement did not reach significance for 2D control (F1,12 = 3.02, P = 0.11; Fig. 5C). Nevertheless, the effect size for the performance difference was notably similar to that of experienced users (Fig. 5, A and C, and table S1), indicating an improvement of similar magnitude. As expected, the sensor and source control MSE values for the naïve users were comparable with those of the CP training group before training (baseline, also naïve). This consistency, independent of skill level, highlights a robust positive influence of source control on online performance. Furthermore, the squared error histograms (Fig. 5D) and extent threshold measures for naïve users (Fig. 5E) displayed analogous trends to those of experienced users; however, these did not reach significance (F1,12 = 2.02, P = 0.18; Fig. 5F).

When looking at the feature maps (Fig. 5G), an important dichotomy can be observed between naïve (weak, sporadic clusters) and experienced (strong, focal clusters) users for both control dimensions that parallels the trends previously observed in the modulation index topographies before (low, sporadic modulation) and after (high, focal modulation) training (Fig. 3, B and D). To quantify the focality/diffuseness of these features, we computed the spread of the group-level feature maps (Fig. 5H), defined as the average weighted distance between the feature location and the hand knob (source space) or C3/C4 electrode (sensor space) (see Materials and Methods). We observed significant or near-significant reductions in the feature spread for experienced users, compared with naïve users, in both the horizontal (Mann-Whitney U test with Bonferroni correction: source, P < 0.005; sensor, P < 0.05) and the vertical (Mann-Whitney U test with Bonferroni correction: source, P < 0.005; sensor, P = 0.22) control dimensions. This physiological difference between naïve and experienced users is in line with their performance difference (MSE) and further supports the contrast in BCI proficiency between the two groups and the overarching effect of source-based control depending on user skill level.
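A plausible reading of this spread metric is a weight-averaged distance to a reference location, which the sketch below implements; the exact weighting is an assumption (see Materials and Methods of the original paper for the definition).

import numpy as np

def feature_spread(feature_locs, feature_weights, reference_loc):
    """Average weighted distance between feature locations and a reference point
    (e.g., the hand-knob vertex in source space or the C3/C4 position in sensor space)."""
    locs = np.asarray(feature_locs, float)          # (n_features, 3)
    w = np.abs(np.asarray(feature_weights, float))  # (n_features,)
    d = np.linalg.norm(locs - np.asarray(reference_loc, float), axis=1)
    return np.sum(w * d) / np.sum(w)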

Fig. 3. Electrophysiological learning effects. (A and B) Left versus right MI task analysis. (A) Maximum sensorimotor R2 value for the CP and DT training groups for the horizontal control task. The effect size, |r|, is indicated under each pair of bars. (B) R2 topographies at baseline (top row) and evaluation (bottom row) for the CP and DT training groups for horizontal control tasks. (C and D) Both hands versus rest MI task analysis. Same as (A) and (B) for vertical control tasks. (E and F) Statistical topographies indicating electrodes that displayed a significant increase in R2 values for the horizontal (E) and vertical (F) control tasks. The electrode map in the middle provides a reference for the electrodes shown. Bar graphs below each topography provide a count for the number of electrodes meeting the various significance thresholds. Bars indicate mean + SEM. Statistical analysis using a one- (E and F) or two-way repeated-measures (A and C) ANOVA (n = 11 per group) with main effects of time [blue, P < 0.05; green, P < 0.01; yellow, P < 0.005; red outline, P < 0.05, false discovery rate (FDR) corrected] and time (#P < 0.05, ###P < 0.005) and training task, respectively. Tukey's HSD post hoc test: *P < 0.05.

Source-based CP BCI control of a robotic arm

Having robustly validated our proposed BCI framework in a controlled environment, we completed our study by transitioning to the applied physical source control of a robotic arm (Fig. 6A). Although the cursor and target wrapping allowed for more complicated control strategies and scenarios, such a feature could not exist in a real-world setting. Therefore, we implemented a modified form of the CP task in a robotic arm control paradigm, where the edge-wrapping feature was replaced with an edge repulsing feature (Fig. 6B and movies S4 to S7). Six experienced users (8.3 ± 2.9 hours of previous BCI training) participated in five source CP BCI sessions containing both virtual cursor and robotic arm control, block-randomized across individuals and sessions. Because no paradigm was implemented to determine performance values before and after training in the modified task, participants were screened for experience and skill level beforehand (see Materials and Methods). Physiological support for user skill level was additionally observed in the group-level feature maps (Fig. 6C) that displayed comparable characteristics with those of other experienced users participating in this study (Fig. 5G).

When users were directly controlling the robotic arm, the behavior of a hidden virtual cursor was also recorded to ensure proper mapping of the arm position in physical space. Across all sessions and individuals, median squared tracking correlation values reached r²_hor = 0.13 (IQR = 0.04 to 0.32) and r²_ver = 0.09 (IQR = 0.03 to 0.28) in the horizontal and vertical dimensions, respectively, for 2D control. In transitioning between virtual cursor and robotic arm control, we observed similar MSE values among the three tracking conditions (virtual cursor, hidden cursor, and robotic arm; F2,40 = 2.62, P = 0.086; Fig. 6D), indicating a smooth transition from the control of a virtual object to a real-world device. This likeness in control quality was further revealed through a lack of significant difference in the squared tracking correlation (r²) for both the horizontal (F2,40 = 0.13, P = 0.88; Fig. 6E) and vertical (F2,40 = 0.77, P = 0.47; Fig. 6E) dimensions. Tracking performance was significantly greater than the chance levels determined for all control conditions and dimensions (Mann-Whitney U test with Bonferroni correction, all P < 0.05). Overall, the notable similarity between virtual cursor control and robotic arm control highlights the possibility of integrating virtual cursor exposure into future clinical training paradigms, where patients have limited access to robotic arm training time.

DISCUSSION

The research presented here describes an encompassing approach aimed at driving noninvasive neural control toward the realistic daily use of a robotic device. We have demonstrated that the CP BCI paradigm not only can be used to successfully gauge a user's BCI proficiency but also can serve as a more effective training tool than traditional center-out DT tasks, accelerating the acquisition of neural cursor control and driving the associated physiological changes. Contrary to users trained with the DT task, those trained with the CP task displayed significant performance improvements in familiar and unfamiliar tasks (Fig. 2, D to F), demonstrating highly flexible skill acquisition. These results were further supported in a third group that also trained with the CP task (Fig. 4, A to C). Participants in this group (sCP) displayed nearly identical learning effects as the original sensor CP group while training with source control, providing confidence for the reproducibility of the effects of CP task training.

As training progressed, it became apparent that the strategies developed by users differed significantly, depending on the training task. For example, various individuals in the DT training group reported using strategies involving selectively attending to their hand(s) through peripheral vision without necessarily focusing on the cursor position. Although such strategies were effective for DT tasks, users employing them often struggled with the CP tasks in the evaluation session, because the moving target and cursor required constant visual attention and adjustment of motor-related mental intent. In this sense, many of these users somewhat ignored the feedback when training with the DT task and treated it similarly to the MI without feedback, reducing its effectiveness (33). The lower success of such strategies manifested within the MI EEG of the DT group as sporadic patterns of modulation after training (Fig. 3D), which are also consistent with the lower levels of cognitive arousal observed during the traditional DT task compared with the CP task (Fig. 2G). We believe that the target dynamics and screen wrapping feature of the CP task (Fig. 2A) likely perturb fluid target tracking and require heightened attention during cursor control. These conclusions support the overarching concept of integrating human factors, such as virtual reality techniques (34, 35), into cognitive-based training tools for improving both user engagement and task performance (20, 36-38) and should be considered in future generations of BCIs.

Seminal works implementing similar continuous tracking tasks using invasively acquired signals reported comparable squared tracking correlation values over a decade ago (29). Although the field of invasive neural decoding has surpassed these benchmark results to include high degree-of-freedom and anthropomorphically functional tasks (3, 39, 40), qualitative similarities can be seen between these two modalities. In accordance with invasive reports, users in our study struggled to keep the cursor in a single location, often exhibiting oscillatory tracking behavior around the target (Fig. 2B and fig. S1, A and B). Although these actions demonstrate directed cursor trajectories toward the target and highlight the ability of our system to accurately capture the users' dynamic mental intent, the tracking correlation is effectively reduced and may benefit from more advanced decoding methods.

Fig. 4. Source-level neurofeedback. (A and B) 2D BCI performance for the CP (A) and DT (B) task at baseline and evaluation for the CP and source CP (sCP) training groups. The red dotted line indicates chance level. The effect size, |r|, is indicated under each pair of bars. (C) Task learning for the CP (left) and DT (right) tasks. Bars indicate mean + SEM. Statistical analysis using a one- (C) or two-way repeated-measures (A and B) ANOVA (n = 11 per group) with main effects of training decoding domain, and time and training decoding domain, respectively. Main effect of time: #P < 0.05, ###P < 0.005. Tukey's HSD post hoc test: *P < 0.05, ***P < 0.005. n.s., not significant. (D) Group-level training feature maps for the training groups for horizontal (top) and vertical (bottom) cursor control. User-specific features were projected onto a template brain for group averaging.


It has been argued that motor neurons encode cursor velocity during neural cursor control (41), with numerous decoding algorithms using such properties to drastically improve user performance over classical techniques (38). In particular, modeling neuronal behavior as a dynamical system has recently yielded significantly improved online decoding results (42) and may provide even more complex and efficient device control in upcoming invasive and noninvasive work. This decoding strategy would be particularly attractive for neural control in the CP task presented here, given the clear analog of our control output to under-dampened control dynamics. Although this information would be valuable to reduce or eliminate the previously described cursor oscillations, it has yet to be observed whether these details can be detected via scalp recordings. Nevertheless, noninvasive neural signals have recently been shown to contain information encoded on the spatial scale of cortical columns (submillimeter), indicating the ability to decode neural activity with very fine spatial-temporal resolution from outside the skull (43).


Over the past few decades, the reconstruction of cortical activity through ESI has exemplified the push to increase the spatial specificity of noninvasive recordings and has been shown to provide superior neural decoding when compared with scalp sensor information (23, 24, 27). Similar to these previous works, we found that, in general, source features were more correlated with cued motor-related mental states than sensor features (fig. S9) (44, 45). Furthermore, in closed-loop CP BCI control, we found that the inclusion of online ESI improved performance in naïve and experienced users, consistent with offline enhancements (Fig. 5 and figs. S7 to S9). The increased task-specific source modulation indicates a higher sensitivity for detecting changes in a user's motor-related mental state and is likely a product of the principles of ESI and its use in modeling and counteracting volume conduction. CP cursor control requires highly dynamic cognitive processes to recognize and correct for the random and sudden changes in the target's trajectory during tracking. We therefore hypothesize that the fast, real-time control required during the CP paradigm takes advantage of the heightened sensitivity of ESI modulation, allowing for quicker responses that more accurately resemble the dynamics involved in the CP task. This phenomenon was apparent during the within-session comparisons of source and sensor control (Fig. 5 and figs. S7 and S8); however, it is possible that, with sufficient training, the feedback domain becomes less important for skill acquisition (Fig. 4).

Fig. 5. Online 2D CP source versus sensor BCI performance. (A and B) Experienced user performance (n = 16). (A) Group-level MSE for source and sensor 2D CP cursor control. Light and dark gray blocks represent performance for the CP training group (n = 11; Fig. 2D) before (naïve) and after training (experienced). The effect size, |r|, is indicated under the pair of bars. (B) Group-level squared-error histograms for 2D CP sensor and source cursor control. (C and D) Naïve user performance (n = 13). Same as (A) and (B) for naïve user data. (E) Scale drawing of the CP paradigm workspace displaying the spatial threshold derived from experienced (yellow) and naïve (green) user data (fig. S6). (F) Cursor dwell time within the spatial threshold for experienced (left) and naïve (right) users. (G) Group-level feature maps for horizontal (top) and vertical (bottom) cursor control for naïve (right) and experienced (left) users. User-specific features were projected onto a template brain for group averaging. (H) Feature spread analysis between experienced and naïve users for source (left) and sensor (right) features for horizontal (top) and vertical (bottom) control. Bars indicate mean + SEM. Statistical analysis using a one- (C and D) or two-way repeated-measures (A and B) ANOVA with main effects of decoding domain, and time and decoding domain, respectively. Main effect of decoding domain: ###P < 0.005 (A, C, and F). Gray bar, P < 0.05 uncorrected; red bar, P < 0.05, FDR corrected (B and D). Mann-Whitney U test with Bonferroni correction for multiple comparisons (H): +P < 0.05, +++P < 0.005.

We feel that it is necessary to acknowledge the decline in performance that occurred between the original CP task and the modified CP task, which we believe to be strongly attributed to the task modifications made for the physical constraints of the robotic arm. The presence of the physical robotic device inherently creates a more distracting environment for neural control compared with that of a virtual cursor. We found that, with the robotic arm mounted on the right side of the user (Figs. 1, bottom, and 6A), visual obstruction of the target was common when the arm was directed to reach across the user to the left side of the screen, often perturbing target tracking. In addition, although participants here displayed previous BCI proficiency, they had less experience than those participating in the original CP task validation. We believe that this combination of reduced user experience and enhanced sensory loading caused by the more complex human-device interaction involving the robotic arm led to a reduction in performance compared with the highly controlled virtual cursor control environment.

The results presented here demonstrate that CP control provides a unique opportunity for the complex control of a virtual cursor and robotic device (14, 39), without requiring discretized, prolonged task sequences (46) that can make even simple task completion long and frustrating. Users were able to smoothly transition between virtual cursor and robotic arm control with minimal changes in performance (Fig. 6, D and E), indicating the potential ease of integrating such a noninvasive assistive tool into clinical applications for autonomous use in daily life. Invasive systems have already demonstrated a level of control similar to such a noninvasive hypothetical; however, although such invasive approaches may offer much-needed help to a restricted number of patients with severe physical dysfunctions, most impaired persons will likely not qualify for participation due to both medical and financial limitations. Additionally, previous work has suggested that accessing sufficiently large patient populations for concrete and statistically significant conclusions may be difficult (1-5, 8, 11, 29, 39). Therefore, there is a strong need to further develop noninvasive BCI technology so that it can benefit most patients and even the general population in the future. The effective training paradigm and additional ESI-based performance improvement demonstrated here, as well as the integration of such targeted enhancements toward robotic arm control, offer increasing confidence that noninvasive BCIs may be able to expand to widespread clinical investigation. We observed that, for robotic arm control, generic head models, rather than those derived from user-specific magnetic resonance imaging (MRI), were sufficient for high-quality performance (see Materials and Methods). Therefore, in all, the work presented in this paper is necessary for current EEG-based BCI paradigms to achieve useful and effective noninvasive robotic device control, and its results are pertinent in directing both ongoing and future studies.

Fig. 6. Source-based CP BCI robotic arm control. (A) Robotic arm CP BCI setup. Users controlled the 2D continuous movement of a 7-degree-of-freedom robotic arm to track a randomly moving target on a computer screen. (B) Depiction of the CP edge repulsion feature (in contrast to the edge wrapping feature; Fig. 2A) used to accommodate the physical limitations of the robotic arm. (C) Group-level feature maps for the horizontal (top row) and vertical (bottom row) control dimensions projected onto a template brain. (D) Group-level 2D MSE for the various control conditions. Bars indicate mean + SEM. (E) Box-and-whisker plots for the group-level squared tracking correlation (r2) values for the horizontal (left) and vertical (right) dimensions during 2D CP control for the various control conditions. Blue lines indicate the medians, tops and bottoms of the boxes indicate the 25th and 75th percentiles, and the top and bottom whiskers indicate the respective minimum and maximum values. Control conditions include virtual cursor (white), hidden cursor (gray), and robotic arm (black). The red dotted line indicates chance level. Statistical analysis using a repeated-measures two-way ANOVA (n = 6 per condition) with main effects of time and control condition.

MATERIALS AND METHODS

BCI tasks

Motor imagery without feedback
EEG data during motor imagination (MI) without feedback were collected at the beginning of each session, one run for left-hand MI versus right-hand MI and one for both hands MI versus rest. Each run consisted of 10 randomly presented trials per task. Each trial consisted of 3 s of rest, followed by 4 s of a visually cued MI task.

DT task
The DT paradigm was composed of fixed target locations and center-out intended cursor trajectories. This paradigm consisted of 21 trials, with targets presented in a random order. Each trial began with a 3-s rest period, followed by a 2-s preparation period, in which the target was presented to the user. Users were then given up to 6 s to move the cursor to hit the target. A 1-s intertrial interval bridged two adjacent trials. Feedback (cursor movement) was not provided during the first trial to calibrate the normalizer as described in the "Online signal processing" section.

During baseline and evaluation sessions, trials ended either upon a collision with a target or after 6 s with no collision. During training sessions, each trial lasted a full 6 s, requiring users to maintain their cursor over the target location for as long as possible within a boundary-constrained workspace. In this sense, during training, each DT run contained 120 s of online BCI control, consistent with the 120-s CP runs.

CP task
The CP stimulus paradigm was implemented using custom Python scripts in the BCPy2000 application module of BCI2000 (47). This paradigm involved the continuous tracking of a target; each run was composed of two 60-s trials separated by a 1-s intertrial interval. To produce smoothly varying random target movement, the position of the target was updated in each frame using a simple kinematic model. Random motion was obtained by applying a randomly generated 1D or 2D external force F_ext, as in Eq. 1, drawn from a zero-mean fixed-variance normal distribution.

\vec{F}_{ext} \sim N^{2}(0, \sigma^{2}) \qquad (1)

To effectively limit maximum target velocity, we also applied a friction force F_f and a drag force F_d. The friction and drag forces are represented in Eqs. 2 and 3, respectively, where \mu indicates the coefficient of friction, \delta indicates the drag, and \vec{v}(t) indicates the velocity of the target at time step t. Here, \lVert \cdot \rVert_{2} denotes the Euclidean norm.

\vec{F}_{f} = -\mu \, \frac{\vec{v}(t)}{\lVert \vec{v}(t) \rVert_{2}} \qquad (2)

\vec{F}_{d} = -\delta \, \vec{v}(t) \, \lVert \vec{v}(t) \rVert_{2} \qquad (3)

When divided by the arbitrary target mass m, the combination of these forces represents the total instantaneous acceleration of the target. Integrating with respect to time, as noted in Eq. 4, produces the updated target velocity \vec{v}(t+1) at the new time point.

\vec{v}(t+1) = \vec{v}(t) + \frac{\vec{F}_{ext} + \vec{F}_{f} + \vec{F}_{d}}{m} \, dt \qquad (4)

For the training and source versus sensor experiments described in subsequent sections, the cursor and target were allowed to wrap from one side of the workspace to the other (left to right, top to bottom, and vice versa). Contrary to this, for the robotic arm versus virtual cursor experiments, the target was repelled by the edges of the workspace to make the task more realistic and accommodate the physical limitations of the robotic arm. Repulsion was accomplished by inverting all applied forces that would push the target continuously into a wall while still randomly generating magnitudes and directions for irrelevant forces. Unlike the target dynamics, the cursor and robotic arm could press against the edge of the bounded region, given the appropriate force vector.
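A minimal sketch of the target dynamics in Eqs. 1 to 4 is given below, assuming a unit-square workspace, a fixed frame interval, and illustrative parameter values; the edge-repulsion branch flips velocity components as a simplified stand-in for the force inversion described above, and none of the constants are the authors' settings.

import numpy as np

rng = np.random.default_rng(0)

def step_target(pos, vel, dt=1/60, sigma=0.5, mu=0.05, delta=0.2, mass=1.0, wrap=True):
    """Advance the target one frame using Eqs. 1 to 4 (illustrative parameters)."""
    f_ext = rng.normal(0.0, sigma, size=pos.shape)        # Eq. 1: random external force
    speed = np.linalg.norm(vel)
    if speed > 0:
        f_fric = -mu * vel / speed                         # Eq. 2: friction
        f_drag = -delta * vel * speed                      # Eq. 3: drag
    else:
        f_fric = f_drag = np.zeros_like(vel)
    vel = vel + (f_ext + f_fric + f_drag) / mass * dt      # Eq. 4: Euler velocity update
    pos = pos + vel * dt
    if wrap:
        pos = np.mod(pos, 1.0)                             # wrap across workspace edges
    else:
        out = (pos < 0.0) | (pos > 1.0)                    # simplified edge repulsion
        vel[out] *= -1
        pos = np.clip(pos, 0.0, 1.0)
    return pos, vel

Starting from pos = np.array([0.5, 0.5]) and vel = np.zeros(2), calling step_target once per rendered frame produces the smoothly varying random trajectory described above.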


Noise/chance performance estimation
Chance performance in the CP paradigm was estimated by collecting 15 (standard) or 70 (physically constrained) datasets each for 1D horizontal (LR), 1D vertical (UD), and 2D control tasks with the electrode sets plugged in but not connected to a human scalp. Chance performance in the DT paradigm was determined by dividing 100% by the number of targets in each control dimension. This is valid because trials that time out are typically excluded when calculating performance for the DT task.

Experimental designSixty-eight healthy humans were informed and participated in differ-ent phases of this study after providing written consent to a protocolapproved by the relevant Institutional Review Board at the Universityof Minnesota or Carnegie Mellon University.TrainingThirty-three individuals (average age: 24.8 ± 10.6 years, 30 right handed,18 male) naïve to BCI participated in longitudinal BCI training overthe course of 10 experimental sessions that included one baseline ses-sion, eight training sessions, and one evaluation session. Participantswere tested on all tasks at the baseline and evaluation time points toassess training effectiveness, completing one block of DT tasks andone block of CP tasks, block-wise randomized across individuals.The blocks for each paradigm were composed of two runs of 1D LR,1D UD, and 2D control. Participants were divided into three traininggroups using the 1D LR DT performance as the balancing metric(figs. S3A and S5A). Naïve participants obtaining PVC values of >80%for both runs of any of the three DT dimensions were excluded fromthe training cohort because these users are often considered proficient(n = 5/38) (14, 28). Participants underwent eight training sessions at12 runs per session, with only their specified task paradigm—DT sen-sor, CP sensor, or CP source. These eight training sessions were brokeninto 2 × 1DLR, 2 × 1DUD, and 4× 2Dcontrol to progress towardmoredifficult tasks near the end of training. The evaluation session was iden-tical to the baseline session, again with the task block order randomizedacross individuals. Baseline and evaluation sessions were all completedusing sensor control for consistency across groups. Participantsunderwent two to three sessions per week with an average intersessioninterval of 3.69 ± 2.99 days.Source versus sensorTwenty-nine individuals participated in experiments testing thewithin-session effects of source versus sensor control on the CPBCI task. Sixteen users (average age: 22.67 ± 8.1 years, 15 right handed,6male) with an average of 12.8 ± 8.9 hours of previous BCI experienceand 13 users (average age: 21.8 ± 5.0 years, 12 right handed, 8 male)naïve to BCI participated in this portion of the study. Experiencedusers participated in up to three BCI sessions, and the naïve users par-ticipated in a single session to avoid the confounding effects oflearning. There were no exclusion criteria in this phase of the studybecause participants were in well-defined naïve or experienced states.A user-specific anatomical MRI was collected for each individualaccording to the MRI acquisition section. In each BCI session, partic-ipants completed 12 runs of CP BCI (4 × 1D LR, 4 × 1D UD, and4 × 2D), with the decoding strategy (sensor or source) being random-ized and balanced across the population.Robotic arm versus virtual cursorSix individuals (average age: 25.2 ± 6.5 years, five right handed, threemale, 8.3 ± 2.9 hours of previous BCI training) participated in experi-ments comparing virtual cursor and robotic arm control. Participantsfor this phase were screened using sensor-based 1D and 2DDT tasks


Participants for this phase were screened using sensor-based 1D and 2D DT tasks using the BCI2000 AR alpha (8 to 13 Hz) power estimation of C3 and C4, spatially filtered with the local pseudo-Laplacian using a Neuroscan Synamps2 (Compumedics Ltd., Victoria, Australia) 64-channel system. Participants were excluded on the basis of a two-stage performance evaluation: (i) failure to achieve >70% 1D PVC (sessions 1 and 2) or >40% 2D PVC (session 2) in two sequential runs and (ii) failure to achieve >90% 1D PVC and >70% 2D PVC (sessions 3 to 5) in two sequential runs. Six of 19 recruited participants passed these criteria.

All robotic arm experiments were conducted on a Samsung 43-inch 4K television, allowing large, practical workspaces for both the robotic arm and the virtual cursor. Each user participated in five source CP BCI sessions containing 12 runs (60 s) (sessions 1 and 2, 6 × 1D LR and 6 × 1D UD; sessions 3 to 5, 3 × 1D LR, 3 × 1D UD, and 6 × 2D) of both virtual cursor and robotic arm control in block-randomized order across users. Some users were asked to return for a sixth session to record video of continuous robotic arm and virtual cursor control. Robotic arm endpoint locations were mapped 1:1 to cursor positions on the screen, with inverse kinematics used to solve for optimal joint angles and arm trajectories. The robotic arm workspace was square with a 0.48-m side length. All robotic arm control was conducted using the Kinova JACO Assistive Robotic Arm with a three-finger attached gripper.

For all BCI sessions, participants were seated in a padded chair about 90 cm from a computer screen. Unless otherwise stated, users were fitted with a 128-channel Biosemi (Biosemi, Amsterdam, The Netherlands) EEG headcap of appropriate size and positioned according to the international 10-20 system. EEG was recorded at 1024 Hz using an ActiveTwo amplifier with active electrodes (Biosemi, Amsterdam, The Netherlands).

MRI acquisition
User-specific anatomical MRI images were acquired on a 3T MRI machine (Siemens Prisma, Erlangen, Germany) using a 32-channel head coil. High-resolution (1-mm isotropic) anatomical images were acquired for each participant using a T1-weighted magnetization prepared rapid acquisition gradient echo sequence (repetition time/echo time, 2350 ms/3.65 ms; flip angle, 7°; acquisition time, 05:06 min; R = 2 acceleration; matrix size, 256 by 256; field of view, 256 mm by 256 mm).

Frequency-domain ESI
For the source versus sensor experiments, the anatomical MRI from each user was segmented in FreeSurfer and uploaded into the MATLAB-based Brainstorm toolbox. For the robotic arm versus virtual cursor experiments, the Colin27 template brain was used for all users. The cortex was downsampled to a tessellated mesh of ~15,000 surface vertices and broken into 12 bilateral regions based on the Destrieux atlas. A central region of interest (ROI), composed of various sensorimotor areas (table S2), was used for feature extraction and online source control.

At the beginning of each BCI session in the source versus sensor experiments, EEG electrode locations were recorded using a FASTRAK digitizer (Polhemus, Colchester, Vermont) using the Brainstorm toolbox. Electrode locations were co-registered with the user's MRI using the nasion and left and right preauricular landmarks. A three-shell realistic geometry head model with a conductivity ratio of 1:1/20:1 was generated using the boundary element method implemented in the OpenMEEG toolbox.

The inverse operator for each session was generated according to the following theory. Equation 5 depicts the linear system relating scalp and cortical activity, where f(t) represents the scalp-recorded EEG at time t, L represents the user- and session-specific leadfield, and J(t) represents the cortical current density at time t

\[ f(t) = L\,J(t) \tag{5} \]

Regularization techniques can help stabilize the often ill-conditioned nature of the leadfield to find optimal estimates of the source distribution. In the current work, we used Tikhonov regularization (Eq. 6). This optimization suggests a solution J(t) that depends on various known parameters that include the sensor covariance matrix C, the source covariance matrix R, the regularization parameter λ, the leadfield, and the scalp EEG

\[ \min_{J} \left\lVert C^{-1/2}\bigl(f(t) - L\,J(t)\bigr) \right\rVert_{2}^{2} + \lambda^{2} \left\lVert R^{-1/2} J(t) \right\rVert_{2}^{2}, \quad \text{where } \lambda^{2} = \frac{\operatorname{tr}(L R L^{T})}{\operatorname{tr}(C)\,\mathrm{SNR}^{2}} \tag{6} \]

The closed-form solution to Eq. 6, solving for an optimal source distribution \(\hat{J}(t)\), is shown in Eq. 7 in the time domain and belongs to the family of minimum-norm estimates. Here, 20 s of resting-state EEG collected at the beginning of each session was used to compute a diagonal sensor covariance matrix C. The source covariance matrix was also a diagonal matrix, with nonzero elements containing a depth-weighted reciprocal of source location power. This modification to the source covariance matrix forms the weighted minimum-norm estimate.

\[ \hat{J}(t) = R L^{T} \bigl( L R L^{T} + \lambda^{2} C \bigr)^{-1} f(t) \tag{7} \]

This solution can be applied in the frequency domain by solving for both the real and imaginary frequency-specific cortical activity independently (45) and subsequently taking the magnitude at each cortical location (Eq. 8)

\[ \hat{J}_{\mathrm{Re}}(f) = R L^{T} \bigl( L R L^{T} + \lambda^{2} C_{\mathrm{Re}} \bigr)^{-1} f_{\mathrm{Re}}(f), \qquad \hat{J}_{\mathrm{Im}}(f) = R L^{T} \bigl( L R L^{T} + \lambda^{2} C_{\mathrm{Im}} \bigr)^{-1} f_{\mathrm{Im}}(f) \tag{8} \]
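For illustration, the following minimal NumPy sketch shows how an inverse operator of this form could be assembled (Eqs. 6 and 7) and applied to the real and imaginary Fourier coefficients of a single frequency bin (Eq. 8). The dimensions, the random leadfield, the diagonal covariances, and all function names are our own assumptions for the sketch, not the authors' MATLAB/Brainstorm implementation.

```python
import numpy as np

def minimum_norm_inverse(L, C, R, snr=3.0):
    """Weighted minimum-norm inverse operator K such that J = K @ f (Eqs. 6-7).

    L : (n_channels, n_sources) leadfield
    C : (n_channels, n_channels) sensor noise covariance (diagonal here)
    R : (n_sources, n_sources) source covariance (depth-weighted diagonal)
    """
    lam2 = np.trace(L @ R @ L.T) / (np.trace(C) * snr**2)  # regularization (Eq. 6)
    gram = L @ R @ L.T + lam2 * C
    return R @ L.T @ np.linalg.inv(gram)                   # Eq. 7

def source_power(K, f_complex):
    """Eq. 8: invert real and imaginary coefficients separately, then take magnitude."""
    J_re = K @ f_complex.real
    J_im = K @ f_complex.imag
    return np.sqrt(J_re**2 + J_im**2)

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    n_ch, n_src = 57, 500                                  # toy dimensions
    L = rng.standard_normal((n_ch, n_src))
    C = np.diag(rng.uniform(0.5, 1.5, n_ch))               # diagonal sensor covariance
    depth = 1.0 / (np.linalg.norm(L, axis=0) ** 2)         # depth-weighted reciprocal power
    R = np.diag(depth)
    K = minimum_norm_inverse(L, C, R)
    f = rng.standard_normal(n_ch) + 1j * rng.standard_normal(n_ch)  # one alpha-band bin
    print(source_power(K, f).shape)                        # (n_src,)
```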

To use the spatial filtering properties of inverse imaging and extract task-related activity, we subjected the reconstructed cortical activity to both anatomical and functional constraints. The anatomical constraint is represented by limiting cortical activity to the central sensorimotor ROI previously described. The functional constraint is based on the data-driven parcellation of the ROI into discretized, functionally coherent cortical clusters. Parcellation is particularly attractive for real-time applications because it improves the condition of the EEG inverse problem and reduces computation time (48). Parcellation was performed using the multivariate source prelocalization algorithm with the MI without feedback data collected at the beginning of each session (48). Solving for the activity in each of these cortical clusters extends Eq. 8 to Eq. 9, where the subscript k indexes the cortical parcels

\[ \hat{J}_{k,\mathrm{Re}}(f) = R_{k} L_{k}^{T} \Bigl( \textstyle\sum_{k} L_{k} R_{k} L_{k}^{T} + \lambda_{\mathrm{Re}}^{2} C_{\mathrm{Re}} \Bigr)^{-1} f_{\mathrm{Re}}(f), \qquad \hat{J}_{k,\mathrm{Im}}(f) = R_{k} L_{k}^{T} \Bigl( \textstyle\sum_{k} L_{k} R_{k} L_{k}^{T} + \lambda_{\mathrm{Im}}^{2} C_{\mathrm{Im}} \Bigr)^{-1} f_{\mathrm{Im}}(f) \tag{9} \]
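A parcel-wise variant of the same algebra can be sketched as follows, assuming the parcels are simply index sets over the source space and that the regularized gram matrix is accumulated over parcels before a single inversion, as in Eq. 9. This only illustrates the linear algebra; in the study, the parcel definitions came from the multivariate source prelocalization algorithm.

```python
import numpy as np

def parcel_inverse_operators(L, C, parcels, R_diag, lam2):
    """Per-parcel minimum-norm operators (sketch of Eq. 9).

    parcels : list of index arrays, one per cortical parcel k
    R_diag  : (n_sources,) diagonal of the source covariance
    Returns a list of operators K_k; applying K_k to f_Re and f_Im and taking
    the magnitude gives the activity of parcel k at one frequency bin.
    """
    gram = lam2 * C
    for idx in parcels:
        Lk = L[:, idx]
        Rk = np.diag(R_diag[idx])
        gram = gram + Lk @ Rk @ Lk.T            # sum over parcels
    gram_inv = np.linalg.inv(gram)
    return [np.diag(R_diag[idx]) @ L[:, idx].T @ gram_inv for idx in parcels]
```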

Channel-frequency optimization
Each of the MI without feedback runs was analyzed individually to identify features used to control cursor movement in the two dimensions.


For the sensor domain, the alpha power (8 to 13 Hz) at each electrode was extracted at a 1-Hz resolution using a Morlet wavelet technique previously described in (24). A stepwise linear regression approach similar to (30) was used with a forward inclusion step (P < 0.01) and a backward removal step (P < 0.01) to find the electrodes and weights that best separated the two tasks used for each control dimension. This procedure was applied to frequency-specific R² montages in the order of descending maximum values until at least one electrode survived the statistical thresholding. The weight of each selected electrode was set to −1 or +1 based on the sign of the regression beta coefficient. A weight of 0 was applied to all other electrodes not selected. If no electrodes were selected for any frequency, then a default setup assigned −1 and +1 to the C3 and C4 electrodes, respectively, for horizontal control and −1 and −1 to both electrodes for vertical control (28).
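A compact sketch of such a forward-inclusion/backward-removal selection is given below, assuming per-trial alpha power as predictors and the motor imagery label as the response. The helper names, the plain OLS t-tests, and the single-frequency treatment are simplifications of the procedure described above rather than the authors' implementation.

```python
import numpy as np
from scipy import stats

def _coef_pvalue(Xs, y, coef_index):
    """Two-sided t-test p-value for one coefficient of an OLS fit (with intercept)."""
    A = np.column_stack([Xs, np.ones(len(y))])
    beta, _, _, _ = np.linalg.lstsq(A, y, rcond=None)
    dof = len(y) - A.shape[1]
    resid = y - A @ beta
    sigma2 = resid @ resid / dof
    cov = sigma2 * np.linalg.inv(A.T @ A)
    t = beta[coef_index] / np.sqrt(cov[coef_index, coef_index])
    return 2 * stats.t.sf(abs(t), dof)

def stepwise_select(X, y, p_in=0.01, p_out=0.01, max_iter=50):
    """Forward-inclusion / backward-removal stepwise regression (sketch).

    X : (n_trials, n_electrodes) alpha power per electrode
    y : (n_trials,) task label (e.g., -1 = left-hand MI, +1 = right-hand MI)
    Returns the selected electrode indices and their regression betas.
    """
    selected = []
    for _ in range(max_iter):
        changed = False
        # Forward step: add the candidate with the smallest p-value below p_in.
        remaining = [j for j in range(X.shape[1]) if j not in selected]
        pvals = [_coef_pvalue(X[:, selected + [j]], y, len(selected)) for j in remaining]
        if remaining and min(pvals) < p_in:
            selected.append(remaining[int(np.argmin(pvals))])
            changed = True
        # Backward step: drop any selected electrode whose p-value exceeds p_out.
        for k in reversed(range(len(selected))):
            if _coef_pvalue(X[:, selected], y, k) > p_out:
                selected.pop(k)
                changed = True
        if not changed:
            break
    A = np.column_stack([X[:, selected], np.ones(len(y))])
    betas = np.linalg.lstsq(A, y, rcond=None)[0][:-1]
    return selected, betas

# Weights would then be signed +1/-1 by the sign of each selected beta, 0 elsewhere,
# falling back to the default C3/C4 assignment if nothing survives the thresholds.
```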

For feature selection in the source domain, the MI EEG was first mapped to the cortical model according to Eq. 9. The stepwise linear regression procedure was applied to all ROI parcels, and weights were assigned accordingly. If no parcels were selected, then the default source setup was defined by assigning a weight of −1 to those parcels containing the left motor cortex hand knob, +1 to those containing the right motor cortex hand knob for horizontal control, and −1 to bilateral hand knob parcels for vertical control. These parcels were identified on the basis of seed points assigned to the hand knobs [similar to (44)] by the operators before the experimental session.

Feature spread was calculated as the average Euclidean distance between the feature location and the lateral hand knob (source space) or C3/C4 electrode (sensor space). The hand knob location was defined as the average location of the previously mentioned seed points. The distance was also weighted by the magnitude of the feature weight to account for its strength. Distances were calculated for the left and right sides of the head individually and pooled together for each dimension.
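One plausible reading of this weighted distance, as a short sketch (the function name and the normalization by the summed weights are our assumptions):

```python
import numpy as np

def feature_spread(feature_locs, weights, reference_loc):
    """Weight-magnitude-weighted mean Euclidean distance of selected features
    from a reference point (hand-knob seed or C3/C4 electrode position)."""
    d = np.linalg.norm(np.asarray(feature_locs, float) - np.asarray(reference_loc, float), axis=1)
    w = np.abs(np.asarray(weights, float))
    return np.sum(w * d) / np.sum(w)
```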

Online signal processing
All online processing was performed using custom MATLAB (MathWorks Inc., MA, USA) scripts that communicated with BCI2000 using the FieldTrip buffer signal processing module. Fifty-seven electrodes covering the motor-parietal region of the scalp were used for online processing. The EEG was downsampled to 256 Hz and bandpass-filtered between 8 and 13 Hz using a fourth-order Butterworth filter before common average referencing. The most recent 250 ms of data was analyzed and used to update the cursor velocity every 100 ms. The instantaneous control signal was computed as the weighted sum of the alpha power in the selected electrodes. If f_t(f) represents the magnitude of the alpha power across the entire EEG montage at time window t and x_h and x_v are vectors containing the electrode weights (1s, −1s, and 0s) assigned during the optimization process, then the instantaneous control signal for each dimension can be represented as

\[ C_{h,t} = x_{h}^{T} f_{t}(f), \qquad C_{v,t} = x_{v}^{T} f_{t}(f) \tag{10} \]

The velocity of the cursor in each dimension was then derived by normalizing these values to zero mean and unit variance based on the values stored from the previous 30 s of online control in the respective dimension

\[ \vec{V}_{h,t} = \frac{C_{h,t} - \bar{C}_{h,\,t-n:t-1}}{\sigma_{C_{h,\,t-n:t-1}}}, \qquad \vec{V}_{v,t} = \frac{C_{v,t} - \bar{C}_{v,\,t-n:t-1}}{\sigma_{C_{v,\,t-n:t-1}}} \tag{11} \]
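The control loop in Eqs. 10 and 11 can be sketched as follows. The class name, the 300-sample history (roughly 30 s at one update per 100 ms), and the cold-start handling are our assumptions, not details of the BCI2000/MATLAB implementation.

```python
import numpy as np
from collections import deque

class VelocityDecoder:
    """Sketch of Eqs. 10-11: weighted alpha power -> z-scored cursor velocity."""

    def __init__(self, x_h, x_v, history_len=300):
        self.x_h = np.asarray(x_h, float)        # electrode/parcel weights, horizontal
        self.x_v = np.asarray(x_v, float)        # electrode/parcel weights, vertical
        self.hist_h = deque(maxlen=history_len)  # past control values (~30 s)
        self.hist_v = deque(maxlen=history_len)

    def step(self, alpha_power):
        """alpha_power: per-channel alpha magnitude for the latest 250-ms window."""
        c_h = float(self.x_h @ alpha_power)      # Eq. 10, horizontal control signal
        c_v = float(self.x_v @ alpha_power)      # Eq. 10, vertical control signal
        v_h = self._normalize(c_h, self.hist_h)  # Eq. 11: z-score against past values only
        v_v = self._normalize(c_v, self.hist_v)
        self.hist_h.append(c_h)                  # current sample enters the history afterward
        self.hist_v.append(c_v)
        return v_h, v_v

    @staticmethod
    def _normalize(c, hist):
        if len(hist) < 2:
            return 0.0                           # not enough history yet
        mu, sd = np.mean(hist), np.std(hist)
        return (c - mu) / sd if sd > 0 else 0.0
```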


The same procedure was performed for source control using the reconstructed cortical frequency information J_t(f) and the corresponding cortical cluster weights. Robotic arm positions were controlled via a custom C++ script, which read and translated cursor positions into optimal joint angles.

Offline data analysis
CP data files contained cursor and target positions. These values were normalized to the screen size and used to obtain an error, defined as the Euclidean distance between the cursor and target, at each time point. The tracking correlation was computed as the Pearson correlation coefficient (r) between the target and cursor position time series. The MSE value was computed as the average of the error time series between these same two position vectors. The choice to use r² (squared tracking correlation) was based on the concept of user control; signed values of r much less than 0 are superior to small positive values (e.g., −1 versus +0.01) because they suggest high-quality control that is inverted, and a simple inversion of weights can then lead to high tracking performance. Furthermore, very few tracking correlation values were negative for both the original and physically constrained CP tasks. DT data files contained target and result codes for each trial used to compute PVC values. Artifactual trials for both DT and CP runs were identified during online BCI control or by offline visual inspection of the EEG and removed from subsequent analysis.
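As a sketch of these two metrics, assuming cursor and target trajectories that have already been normalized to the screen (the function name and array layout are illustrative):

```python
import numpy as np

def cp_metrics(cursor_xy, target_xy):
    """Tracking metrics for one CP run.

    cursor_xy, target_xy : (n_samples, 2) normalized positions.
    Returns the squared tracking correlation per dimension and the mean
    Euclidean error between cursor and target.
    """
    r2 = np.array([
        np.corrcoef(cursor_xy[:, d], target_xy[:, d])[0, 1] ** 2
        for d in range(cursor_xy.shape[1])
    ])
    mean_error = np.mean(np.linalg.norm(cursor_xy - target_xy, axis=1))
    return r2, mean_error
```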

MI without feedback data files contained the 128-channel EEG and MI task labels. Nonstationary high-variance signals were initially removed from the raw EEG using the artifact subspace reconstruction EEGLAB plugin. Bad channels were spherically interpolated. The clean EEG was downsampled to 128 Hz, filtered between 5 and 30 Hz using a fourth-order Butterworth filter, and re-referenced to the common average. The alpha (8 to 13 Hz) power was extracted from each channel using a Morlet wavelet for the time periods of 0.5 to 4.0 s after each stimulus presentation; a 0.5-s delay was included to account for user reaction time (after the visual cue). The alpha power in each channel and each frequency was regressed against the task labels. For the source domain, cortical alpha power was computed according to the frequency-domain ESI section and regressed against the task labels.
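A minimal sketch of Morlet-based alpha power extraction for a single epoch is shown below. The wavelet width (seven cycles) and the energy normalization are illustrative choices and are not necessarily those used in (24).

```python
import numpy as np

def morlet_alpha_power(eeg, fs=128, freqs=range(8, 14), n_cycles=7):
    """Per-channel, per-frequency alpha power via complex Morlet convolution.

    eeg : (n_channels, n_samples) single-epoch EEG (e.g., 0.5-4.0 s post-cue).
    Returns power averaged over time, shape (n_channels, n_freqs).
    """
    n_ch, _ = eeg.shape
    power = np.zeros((n_ch, len(freqs)))
    for fi, f in enumerate(freqs):
        sigma_t = n_cycles / (2 * np.pi * f)                  # wavelet width in seconds
        t = np.arange(-3 * sigma_t, 3 * sigma_t, 1 / fs)
        wavelet = np.exp(2j * np.pi * f * t) * np.exp(-t**2 / (2 * sigma_t**2))
        wavelet /= np.sqrt(np.sum(np.abs(wavelet) ** 2))      # unit-energy normalization
        for ch in range(n_ch):
            analytic = np.convolve(eeg[ch], wavelet, mode="same")
            power[ch, fi] = np.mean(np.abs(analytic) ** 2)    # mean power over the epoch
    return power
```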

Eye activity was extracted using ICA. Clean EEG data for all tasks in the baseline and evaluation sessions were concatenated into separate datasets and decomposed using the extended infomax algorithm. The dimensionality of the data was first reduced using principal components analysis. The vertical and horizontal eye activity components (for fig. S4 analysis) were identified as those containing high delta (1 to 4 Hz) activity and strong monopolar and bipolar frontal electrode projections, respectively (49). Not all sessions contained both distinct components meeting these criteria. Blink activity was computed as the variance of the (vertical/blink) independent component (IC) activation sequence during DT and CP control separately. To determine the influence of eye activity on BCI performance, we performed a regression analysis between the vertical or horizontal eye activity IC activation sequence and target location in the corresponding dimension.

Statistical analysis
Statistical analysis was performed using custom R and MATLAB scripts. Effect sizes are reported throughout the article as the point biserial correlation, |r|, to highlight within-group (e.g., training) and across-condition (e.g., source versus sensor and robotic arm versus virtual cursor) differences. The point biserial correlation was computed according to Eq. 12, where M1 and M2 are the means of the two distributions being compared and SD_pooled is the pooled SD (d is also known as Cohen's d)

\[ r = \frac{d}{\sqrt{d^{2} + 4}}, \qquad d = \frac{M_{1} - M_{2}}{\mathrm{SD}_{\mathrm{pooled}}} \tag{12} \]
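A direct translation of Eq. 12, assuming two independent samples and a standard pooled-SD definition (the function name is our own):

```python
import numpy as np

def point_biserial_from_groups(a, b):
    """Absolute effect size |r| from two samples via Cohen's d (Eq. 12)."""
    a, b = np.asarray(a, float), np.asarray(b, float)
    sd_pooled = np.sqrt(((len(a) - 1) * a.var(ddof=1) + (len(b) - 1) * b.var(ddof=1))
                        / (len(a) + len(b) - 2))
    d = (a.mean() - b.mean()) / sd_pooled
    return abs(d / np.sqrt(d**2 + 4))
```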

Unless otherwise stated, two-way repeated-measures ANOVAs were used with main effects of time and training task (DT versus CP), decoding domain (source versus sensor), or control method (robotic arm versus virtual cursor). All behavioral and electrophysiological metrics were first evaluated with the Shapiro-Wilk test to assess the normality of the residuals of a standard ANOVA. If the P value of the majority of all multiple comparisons was less than 0.05, then a rank-transformed ANOVA was used. Otherwise, a standard ANOVA was used. If fewer than 10 multiple comparisons were made, then a Tukey's post hoc test was used to correct for multiple comparisons, and if more than 10 comparisons were made, then FDR correction (P < 0.05) was used. A Mann-Whitney U test with Bonferroni correction for multiple comparisons was used for specific cases: comparing squared tracking correlation values (r²) of neural control with noise in the constrained CP task and comparing the feature spread in the source and sensor domains in naïve and experienced users.
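The normality check that gates the choice between a standard and a rank-transformed ANOVA could be sketched as follows; the residual grouping and the majority-rule threshold are our reading of the procedure above, not the authors' R/MATLAB code.

```python
import numpy as np
from scipy import stats

def rank_transform_needed(residual_sets, alpha=0.05):
    """Return True when the majority of Shapiro-Wilk tests reject normality.

    residual_sets : list of 1-D arrays of ANOVA residuals, one per comparison.
    """
    rejections = sum(stats.shapiro(np.asarray(r))[1] < alpha for r in residual_sets)
    return rejections > len(residual_sets) / 2

def rank_transform(values):
    """Replace observations by their ranks before running a standard ANOVA."""
    return stats.rankdata(values)
```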


SUPPLEMENTARY MATERIALS
robotics.sciencemag.org/cgi/content/full/4/31/eaaw6844/DC1
Fig. S1. Example CP trajectories.
Fig. S2. Squared tracking correlation histograms.
Fig. S3. CP versus DT BCI learning.
Fig. S4. Influence of eye activity on BCI control.
Fig. S5. Source versus sensor BCI learning.
Fig. S6. 2D CP source versus sensor spatial threshold.
Fig. S7. Online 1D horizontal CP source versus sensor BCI performance.
Fig. S8. Online 1D vertical CP source versus sensor BCI performance.
Fig. S9. Offline source versus sensor sensorimotor modulation.
Table S1. Absolute effect sizes for source-based versus sensor-based CP control.
Table S2. Source-level sensorimotor ROI anatomical structures.
Movie S1. 1D horizontal CP (unconstrained) BCI virtual cursor control example trial.
Movie S2. 1D vertical CP (unconstrained) BCI virtual cursor control example trial.
Movie S3. 2D CP (unconstrained) BCI virtual cursor control example trial.
Movie S4. 1D horizontal CP (physically constrained) BCI robotic arm control example trial.
Movie S5. 1D vertical CP (physically constrained) BCI robotic arm control example trial.
Movie S6. 2D CP (physically constrained) BCI robotic arm control example trial.
Movie S7. 2D CP (physically constrained) BCI virtual cursor control example trial.


REFERENCES AND NOTES
1. C. Pandarinath, P. Nuyujukian, C. H. Blabe, B. L. Sorice, J. Saab, F. R. Willett, L. R. Hochberg, K. V. Shenoy, J. M. Henderson, High performance communication by people with paralysis using an intracortical brain-computer interface. eLife 6, e18554 (2017).
2. M. J. Vansteensel, E. G. M. Pels, M. G. Bleichner, M. P. Branco, T. Denison, Z. V. Freudenburg, P. Gosselaar, S. Leinders, T. H. Ottens, M. A. van den Boom, P. C. Van Rijen, E. J. Aarnoutse, N. F. Ramsey, Fully implanted brain–computer interface in a locked-in patient with ALS. N. Engl. J. Med. 375, 2060–2066 (2016).
3. C. E. Bouton, A. Shaikhouni, N. V. Annetta, M. A. Bockbrader, D. A. Friedenberg, D. M. Nielson, G. Sharma, P. B. Sederberg, B. C. Glenn, W. J. Mysiw, A. G. Morgan, M. Deogaonkar, A. R. Rezai, Restoring cortical control of functional movement in a human with quadriplegia. Nature 533, 247–250 (2016).
4. S. R. Soekadar, M. Witkowski, C. Gómez, E. Opisso, J. Medina, M. Cortese, M. Cempini, M. C. Carrozza, L. G. Cohen, N. Birbaumer, N. Vitiello, Hybrid EEG/EOG-based brain-neural hand exoskeleton restores fully independent living activities after quadriplegia. Sci. Robot. 1, eaag3296 (2016).
5. U. Chaudhary, B. Xia, S. Silvoni, L. G. Cohen, N. Birbaumer, Brain–computer interface–based communication in the completely locked-in state. PLOS Biol. 15, e1002593 (2017).


6. B. He, B. Baxter, B. J. Edelman, C. C. Cline, W. W. W. Ye, Noninvasive brain-computer interfaces based on sensorimotor rhythms. Proc. IEEE 103, 907–925 (2015).
7. J. d. R. Millán, R. Rupp, G. R. Müller-Putz, R. Murray-Smith, C. Giugliemma, M. Tangermann, C. Vidaurre, F. Cincotti, A. Kübler, R. Leeb, C. Neuper, K.-R. Müller, D. Mattia, Combining brain-computer interfaces and assistive technologies: State-of-the-art and challenges. Front. Neurosci. 4, 161 (2010).
8. R. Leeb, L. Tonin, M. Rohm, L. Desideri, T. Carlson, J. d. R. Millán, Towards independence: A BCI telepresence robot for people with severe motor disabilities. Proc. IEEE 103, 969–982 (2015).
9. V. Gilja, C. Pandarinath, C. H. Blabe, P. Nuyujukian, J. D. Simeral, A. A. Sarma, B. L. Sorice, J. A. Perge, B. Jarosiewicz, L. R. Hochberg, K. V. Shenoy, J. M. Henderson, Clinical translation of a high-performance neural prosthesis. Nat. Med. 21, 1142–1145 (2015).
10. X. Chen, Y. Wang, M. Nakanishi, X. Gao, T.-P. Jung, S. Gao, High-speed spelling with a noninvasive brain–computer interface. Proc. Natl. Acad. Sci. 112, E6058–E6067 (2015).
11. E. M. Holz, L. Botrel, T. Kaufmann, A. Kübler, Long-term independent brain-computer interface home use improves quality of life of a patient in the locked-in state: A case study. Arch. Phys. Med. Rehabil. 96, S16–S26 (2015).
12. J. L. Collinger, M. A. Kryger, R. Barbara, T. Betler, K. Bowsher, E. H. P. Brown, S. T. Clanton, A. D. Degenhart, S. T. Foldes, R. A. Gaunt, F. E. Gyulai, E. A. Harchick, D. Harrington, J. B. Helder, T. Hemmes, M. S. Johannes, K. D. Katyal, G. S. F. Ling, A. J. C. McMorland, K. Palko, M. P. Para, J. Scheuermann, A. B. Schwartz, E. R. Skidmore, F. Solzbacher, A. V. Srikameswaran, D. P. Swanson, S. Swetz, E. C. Tyler-Kabara, M. Velliste, W. Wang, D. J. Weber, B. Wodlinger, M. L. Boninger, Collaborative approach in the development of high-performance brain-computer interfaces for a neuroprosthetic arm: Translation from animal models to human control. Clin. Transl. Sci. 7, 52–59 (2014).
13. K. D. Anderson, Targeting recovery: Priorities of the spinal cord-injured population. J. Neurotrauma 21, 1371–1383 (2004).
14. U. Chaudhary, N. Birbaumer, A. Ramos-Murguialday, Brain–computer interfaces for communication and rehabilitation. Nat. Rev. Neurol. 12, 513–525 (2016).
15. M. Lotze, C. Braun, N. Birbaumer, S. Anders, L. G. Cohen, Motor learning elicited by voluntary drive. Brain 126, 866–872 (2003).
16. S. Musallam, B. D. Corneil, B. Greger, H. Scherberger, R. A. Andersen, Cognitive control signals for neural prosthetics. Science 305, 258–262 (2004).
17. J. J. Daly, J. R. Wolpaw, Brain-computer interfaces in neurological rehabilitation. Lancet Neurol. 7, 1032–1043 (2008).
18. J. R. Wolpaw, D. J. McFarland, Control of a two-dimensional movement signal by a noninvasive brain-computer interface in humans. Proc. Natl. Acad. Sci. U.S.A. 101, 17849–17854 (2004).
19. G. Pfurtscheller, F. H. Lopes da Silva, Event-related EEG/MEG synchronization and desynchronization: Basic principles. Clin. Neurophysiol. 110, 1842–1857 (1999).
20. R. M. Yerkes, J. D. Dodson, The relation of strength of stimulus to rapidity of habit-formation. J. Comp. Neurol. Psychol. 18, 459–482 (1908).
21. T. Ball, M. Kern, I. Mutschler, A. Aertsen, A. Schulze-Bonhage, Signal quality of simultaneously recorded invasive and non-invasive EEG. Neuroimage 46, 708–716 (2009).
22. H. Ramoser, J. Müller-Gerking, G. Pfurtscheller, Optimal spatial filtering of single trial EEG during imagined hand movement. IEEE Trans. Rehabil. Eng. 8, 441–446 (2000).
23. L. Qin, L. Ding, B. He, Motor imagery classification by means of source analysis for brain-computer interface applications. J. Neural Eng. 1, 135–141 (2004).
24. B. J. Edelman, B. Baxter, B. He, EEG source imaging enhances the decoding of complex right-hand motor imagery tasks. IEEE Trans. Biomed. Eng. 63, 4–14 (2016).
25. K. K. Ang, Z. Y. Chin, C. Wang, C. Guan, H. Zhang, Filter bank common spatial pattern algorithm on BCI competition IV datasets 2a and 2b. Front. Neurosci. 6, 39 (2012).
26. B. He, A. Sohrabpour, E. Brown, Z. Liu, Electrophysiological source imaging: A noninvasive window to brain dynamics. Annu. Rev. Biomed. Eng. 20, 171–196 (2018).
27. V. Shenoy Handiru, A. P. Vinod, C. Guan, EEG source space analysis of the supervised factor analytic approach for the classification of multi-directional arm movement. J. Neural Eng. 14, 46008 (2017).
28. K. LaFleur, K. Cassady, A. Doud, K. Shades, E. Rogin, B. He, Quadcopter control in three-dimensional space using a noninvasive motor imagery-based brain–computer interface. J. Neural Eng. 10, 46003 (2013).
29. L. R. Hochberg, M. D. Serruya, G. M. Friehs, J. A. Mukand, M. Saleh, A. H. Caplan, A. Branner, D. Chen, R. D. Penn, J. P. Donoghue, Neuronal ensemble control of prosthetic devices by a human with tetraplegia. Nature 442, 164–171 (2006).
30. D. J. McFarland, W. A. Sarnacki, J. R. Wolpaw, Electroencephalographic (EEG) control of three-dimensional movement. J. Neural Eng. 7, 36007 (2010).
31. N. Birbaumer, L. G. Cohen, Brain-computer interfaces: Communication and restoration of movement in paralysis. J. Physiol. 579, 621–636 (2007).
32. M. K. Holland, G. Tarlow, Blinking and mental load. Psychol. Rep. 31, 119–127 (1972).
33. T. Ono, A. Kimura, J. Ushiba, Daily training with realistic visual feedback improves reproducibility of event-related desynchronisation following hand motor imagery. Clin. Neurophysiol. 124, 1779–1786 (2013).


34. S. Perdikis, L. Tonin, S. Saeedi, C. Schneider, J. d. R. Millán, The Cybathlon BCI race: Successful longitudinal mutual learning with two tetraplegic users. PLOS Biol. 16, e2003787 (2018).
35. J. A. Anguera, J. Boccanfuso, J. L. Rintoul, O. Al-Hashimi, F. Faraji, J. Janowich, E. Kong, Y. Larraburo, C. Rolle, E. Johnston, A. Gazzaley, Video game training enhances cognitive control in older adults. Nature 501, 97–101 (2013).
36. A. Myrden, T. Chau, Effects of user mental state on EEG-BCI performance. Front. Hum. Neurosci. 9, 308 (2015).
37. J. Gottlieb, Attention, learning, and the value of information. Neuron 76, 281–295 (2012).
38. V. Gilja, P. Nuyujukian, C. A. Chestek, J. P. Cunningham, B. M. Yu, J. M. Fan, M. M. Churchland, M. T. Kaufman, J. C. Kao, S. I. Ryu, K. V. Shenoy, A high-performance neural prosthesis enabled by control algorithm design. Nat. Neurosci. 15, 1752–1757 (2012).
39. L. R. Hochberg, D. Bacher, B. Jarosiewicz, N. Y. Masse, J. D. Simeral, J. Vogel, S. Haddadin, J. Liu, S. S. Cash, P. van der Smagt, J. P. Donoghue, Reach and grasp by people with tetraplegia using a neurally controlled robotic arm. Nature 485, 372–375 (2012).
40. B. Wodlinger, J. E. Downey, E. C. Tyler-Kabara, A. B. Schwartz, M. L. Boninger, J. L. Collinger, Ten-dimensional anthropomorphic arm control in a human brain-machine interface: Difficulties, solutions, and limitations. J. Neural Eng. 12, 16011 (2015).
41. J. M. Carmena, M. A. Lebedev, R. E. Crist, J. E. O'Doherty, D. M. Santucci, D. F. Dimitrov, P. G. Patil, C. S. Henriquez, M. A. L. Nicolelis, Learning to control a brain-machine interface for reaching and grasping by primates. PLOS Biol. 1, 193–208 (2003).
42. J. C. Kao, P. Nuyujukian, S. I. Ryu, M. M. Churchland, J. P. Cunningham, K. V. Shenoy, Single-trial dynamics of motor cortex and their applications to brain-machine interfaces. Nat. Commun. 6, 7759 (2015).
43. R. M. Cichy, F. M. Ramirez, D. Pantazis, Can visual information encoded in cortical columns be decoded from magnetoencephalography data in humans? Neuroimage 121, 193–204 (2015).
44. F. Cincotti, D. Mattia, F. Aloise, S. Bufalari, L. Astolfi, F. De Vico Fallani, A. Tocci, L. Bianchi, M. G. Marciani, S. Gao, J. Millan, F. Babiloni, High-resolution EEG techniques for brain-computer interface applications. J. Neurosci. Methods 167, 31–42 (2008).
45. H. Yuan, A. Doud, A. Gururajan, B. He, Cortical imaging of event-related (de)synchronization during online control of brain-computer interface using minimum-norm estimates in frequency domain. IEEE Trans. Neural Syst. Rehabil. Eng. 16, 425–431 (2008).


46. J. Meng, S. Zhang, A. Bekyo, J. Olsoe, B. Baxter, B. He, Noninvasive electroencephalogram based control of a robotic arm for reach and grasp tasks. Sci. Rep. 6, 38565 (2016).
47. G. Schalk, D. J. McFarland, T. Hinterberger, N. Birbaumer, J. R. Wolpaw, BCI2000: A general-purpose brain-computer interface (BCI) system. IEEE Trans. Biomed. Eng. 51, 1034–1043 (2004).
48. J. Mattout, M. Pélégrini-Issac, L. Garnero, H. Benali, Multivariate source prelocalization (MSP): Use of functionally informed basis functions for better conditioning the MEG inverse problem. Neuroimage 26, 356–373 (2005).
49. T.-P. Jung, S. Makeig, M. Westerfield, J. Townsend, E. Courchesne, T. J. Sejnowski, Removal of eye activity artifacts from visual event-related potentials in normal and clinical subjects. Clin. Neurophysiol. 111, 1745–1758 (2000).

Acknowledgments: We thank A. Sohrabpour for helpful discussions and S. Sherman for help in participant recruitment. Funding: This work was supported, in part, by NIH AT009263, MH114233, EB021027, NS096761, and NSF CBET-1264782. B.J.E. was supported, in part, by NIH/NINDS Fellowship F31NS096964. Author contributions: B.J.E., J.M., D.S., B.S.B., and B.H. designed experiments. B.J.E., J.M., D.S., C.Z., and E.N. collected the data. B.J.E., J.M., and D.S. analyzed the data. B.J.E., J.M., D.S., and B.H. wrote the manuscript. B.J.E. created the real-time processing software. C.C.C. designed the CP task. J.M. designed the robotic arm interface. B.H. supervised the project involving all aspects of the study design and data collection and analysis. Competing interests: The authors declare that they have no competing interests. Data and materials availability: All data needed to evaluate the conclusions in the paper are present in the paper or the Supplementary Materials. BCI and EEG data used to create Figs. 2, 3, and 5 will be available at doi:10.5061/dryad.v46p2jh. Additional materials can be requested from B.H. under a material transfer agreement with Carnegie Mellon University.

Submitted 15 January 2019
Accepted 17 May 2019
Published 19 June 2019
10.1126/scirobotics.aaw6844

Citation: B. J. Edelman, J. Meng, D. Suma, C. Zurn, E. Nagarajan, B. S. Baxter, C. C. Cline, B. He, Noninvasive neuroimaging enhances continuous neural tracking for robotic device control. Sci. Robot. 4, eaaw6844 (2019).
