Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles
Commercialization Fellowship Technical Presentation
Taylor Clawson
Ithaca, NY
June 12, 2018
About Me: Taylor Clawson
• 3rd-year PhD student in Mechanical and Aerospace Engineering
– Advisor: Silvia Ferrari
• Past Experience
– Mechanical Engineering B.S., Utah State University, 2013
– Lead developer of custom reliability analysis software at Northrop Grumman
• Graduate Research
– Dynamics and controls, emphasizing neuromorphic sensing and control
– Flight modeling for insect-scale robots
– Neuromorphic autonomy for micro-aerial vehicles
– Autonomous target tracking and obstacle avoidance
Neuromorphic Sensing and Control
• Neuromorphic sensing and control algorithms for intelligent, energy-efficient autonomy
Image credit: inivation (https://inivation.com/)
The Innovation
1. Neuromorphic cameras have 1 μs temporal resolution and require at most a few milliwatts of power.
2. Spiking neural networks (SNNs) can learn online to improve performance or adapt to new conditions.
Motivation: Small-Scale Autonomous Flight
• Safer: weigh less than 1 pound and thus are safe to operate near humans
• Smaller: can access narrow or confined spaces inaccessible to other vehicles
• Autonomous flight expands the capability of a single operator to monitor a larger area
– More effective search and rescue
– Agricultural monitoring
– Public event security
Crazyflie 2.0 (https://www.bitcraze.io/crazyflie-2/), 27 g, 9 cm
RoboBee [Ma, 2013]
Neuromorphic Control
Single-layer SNN Controller
• SNN function approximation via connection weights M, W, and b:
  y(t) = W s(t) = W F(M x(t) + b)
• Output connection weights W determined offline by supervised learning
• Training data set {(x_j, f(x_j))}, j = 1, …, M, generated by a stabilizing target control law (e.g. optimal linear control):
  W = argmin_V Σ_j ‖f(x_j) − V F(M x_j + b)‖²
M – input connection weights
W – output connection weights
b – input bias
s(t) – post-synaptic current
F – nonlinear activation function
f(x_j) – target control law data
M – number of training data points
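As an illustration, here is a minimal numpy sketch of this offline least-squares step, assuming hypothetical dimensions, a ReLU surrogate for the spiking nonlinearity F, and a placeholder linear target law f(x) = −Kx (not the implementation from the talk):

```python
import numpy as np

# Hypothetical sizes: n-dim state, N neurons, m control outputs, M_pts samples.
n, N, m, M_pts = 4, 100, 2, 1000

rng = np.random.default_rng(0)
M = rng.normal(size=(N, n))          # fixed input connection weights
b = rng.normal(size=N)               # fixed input biases

def F(a):
    """Rate-based surrogate for the spiking nonlinearity (ReLU here)."""
    return np.maximum(a, 0.0)

# Training states x_j and targets f(x_j) from a stabilizing control law;
# here a placeholder linear feedback f(x) = -K x.
K = rng.normal(size=(m, n))
X = rng.uniform(-1, 1, size=(M_pts, n))
targets = -X @ K.T                               # f(x_j) for each sample

# Post-synaptic currents s_j = F(M x_j + b) for every training point.
S = F(X @ M.T + b)                               # shape (M_pts, N)

# W = argmin_V sum_j ||f(x_j) - V F(M x_j + b)||^2 via least squares.
W = np.linalg.lstsq(S, targets, rcond=None)[0].T # shape (m, N)

u = W @ F(M @ X[0] + b)                          # SNN control for one state
```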
SNN Control Model
• Neurons generate spike trains ρ(t) based on input current I(t):
  ρ(t) = Σ_{k=1}^{M} δ(t − t_k)
• Synapses filter the spikes and generate post-synaptic current s(t):
  s(t) = ∫₀ᵗ h(t − τ) ρ(τ) dτ
• Synapses modeled as first-order low-pass filters h(t):
  h(t) = (1/τ_s) e^(−t/τ_s)
δ – Dirac delta
t_k – time of kth spike
M – spike count
τ_s – synaptic time constant
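A short numpy sketch of this synapse model, with illustrative spike times and time constant:

```python
import numpy as np

tau_s = 0.01                    # synaptic time constant tau_s (illustrative)
dt = 1e-4                       # time step (s)
t = np.arange(0.0, 0.2, dt)

spike_times = [0.02, 0.05, 0.053, 0.12]   # hypothetical spike times t_k

# s(t) = sum_k h(t - t_k), with h(t) = (1/tau_s) exp(-t/tau_s) for t >= 0,
# i.e. each spike injects a decaying exponential of post-synaptic current.
s = np.zeros_like(t)
for t_k in spike_times:
    on = t >= t_k
    s[on] += np.exp(-(t[on] - t_k) / tau_s) / tau_s
```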
Adaptive SNN Controller (Hovering Only)
• First step towards full flight envelope control
• Control signal u(t) provided entirely by spiking neural networks:
  u(t) = u₀(t) + u_adapt(t)
• Non-adaptive term u₀(t) trained offline by supervised learning to approximate a traditional control law
• Adaptive term u_adapt(t) adapts online to minimize output error
x_ref – reference state
u₀ – non-adaptive control input
u_adapt – adaptive control input
Δx – state error
u_a – amplitude input
u_p – pitch input
u_r – roll input
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]
Adaptive SNN Controller (Hovering Only)
• Adaptive term u_adapt comprises inputs for flapping amplitude u_a, pitch u_p, and roll u_r:
  u_adapt(t) = [u_a(t)  u_p(t)  u_r(t)]ᵀ
• Every element of u_adapt computed from a single network of 100 neurons, e.g.:
  u_a(t) = W_a s_a(t)
• The output connection weights adapt online to minimize output error:
  E(t) = Λ (Δx(t) + Δẋ(t))
  Ẇ_a(t) = κ E(t) s_a(t)ᵀ
Λ – error weight matrix
Δx – state error
W_a – output connection weights
s_a – post-synaptic current
κ – learning rate
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]
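A minimal Euler-discretized sketch of this online update; the dimensions, Λ, and learning rate are placeholders, and the sign convention of the update may differ from the talk's:

```python
import numpy as np

rng = np.random.default_rng(1)
n_x, n_u, N = 12, 3, 100        # assumed error dim, control dim, neuron count
kappa = 0.1                     # learning rate kappa (illustrative)
dt = 1e-3                       # update time step (s)

Lam = rng.normal(size=(n_u, n_x))   # error weight matrix Lambda (placeholder)
W_a = np.zeros((n_u, N))            # adaptive output weights, start at zero

def adapt_step(W_a, s_a, dx, dx_dot):
    """One Euler step of W_a_dot = kappa * E(t) * s_a(t)^T."""
    E = Lam @ (dx + dx_dot)         # E(t) = Lambda (dx + dx_dot)
    u_adapt = W_a @ s_a             # current adaptive control [u_a, u_p, u_r]
    W_a = W_a + dt * kappa * np.outer(E, s_a)
    return W_a, u_adapt
```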
Adaptive SNN Controller (Hovering Only)
• SNN initialized with traditional controller
• SNN controller quickly adapts to wing asymmetries to stabilize hovering flight
• Traditional controller maintains stability, but drifts significantly from the target
[Figure: simulated hovering flight; position components vs. time (s), comparing the adaptive SNN controller and the traditional (PIF) controller against the hovering target]
[T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017]
SNN Controller – Full Flight Envelope
• SNN trained to approximate the steady-state gain of a gain-scheduled controller
• Gain matrices depend on the scheduling variables a:
  u̇(t) = −K₁(a) x̃(t) − K₂(a) ũ(t) − K₃(a) ξ(t)
• Steady-state gain computed using the transfer function and the final value theorem:
  G(s) = −(sI + K₂(a))⁻¹ K₁(a)
  K_ss = G(0) = −K₂(a)⁻¹ K₁(a)
• Network output weights computed to approximate the steady-state gain matrix K_ss:
  W = argmin_V Σ_j ‖K_ss(a_j) − V F(M a_j + b)‖²
• SNN control input is a linear transformation of the post-synaptic current:
  u(t) = W s(a(t))
K_i – PIF gain matrices
x̃ – state deviation
ũ – control deviation
ξ – integral of output error
a – scheduling variables
G(s) – transfer function
s – Laplace variable
K_ss – steady-state gain matrix
W – output connection weights
s(t) – post-synaptic current
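A small numpy check of the final-value-theorem step, with placeholder gain matrices standing in for K₁(a) and K₂(a):

```python
import numpy as np

rng = np.random.default_rng(2)
n = 4                                   # illustrative state dimension
K1 = rng.normal(size=(n, n))            # placeholder K1(a)
K2 = 2.0 * np.eye(n) + 0.1 * rng.normal(size=(n, n))   # placeholder K2(a)

# K_ss = G(0) = -K2^{-1} K1, from G(s) = -(sI + K2)^{-1} K1.
K_ss = -np.linalg.solve(K2, K1)

# Final value theorem sanity check: G(s) -> K_ss as s -> 0.
s = 1e-9
G0 = -np.linalg.solve(s * np.eye(n) + K2, K1)
assert np.allclose(G0, K_ss, atol=1e-6)
```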
SNN Control – Climbing Turn
SNN Control – Complete Turn
Neuromorphic Sensing
Exteroceptive Sensing Motivation
• Onboard exteroceptive sensors required for full flight autonomy
• Fast dominant time scales of small-scale flight require a high sensing rate and low latency
– Traditional sensors consume large amounts of power at high sensing rates (e.g. ~100 watts for a high-speed camera)
– High data rate requires additional data processing
• Neuromorphic vision sensors have 1 μs temporal resolution and require at most a few milliwatts of power [Lichtsteiner, '08], [Brandli, '14]
Image credit: inivation (https://inivation.com)
Background: Neuromorphic Vision
• Neuromorphic cameras generate asynchronous events instead of frames
• An event at (x, y) is generated at time t_i, with polarity p_i
• "On" events when p_i = 1
• "Off" events when p_i = −1
• The ith event e_i is described by the tuple e_i = (x, y, t_i, p_i), where x, y ∈ ℤ₊, t ∈ ℝ₊, and p ∈ {−1, 1}
• The set of all events is E = {e_i}, i = 1, …, N
• Polarity is determined by the change in log intensity:
  p_i = +1 if ln(I(x, y, t_i)) − ln(I(x, y, t_{i−1})) > 0
  p_i = −1 if ln(I(x, y, t_i)) − ln(I(x, y, t_{i−1})) < 0
[Figure: source image with the resulting "On" events and "Off" events plotted over t (s)]
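For concreteness, one plausible in-memory layout for such an event stream (a hypothetical numpy structured array, not a prescribed format):

```python
import numpy as np

# Each event e_i = (x_i, y_i, t_i, p_i): pixel coordinates, timestamp, polarity.
event_dtype = np.dtype([("x", np.uint16), ("y", np.uint16),
                        ("t", np.float64), ("p", np.int8)])

events = np.array([(12, 40, 1.03e-4, 1),     # "On" event: brightness increase
                   (13, 40, 1.14e-4, 1),
                   (87, 22, 1.20e-4, -1)],   # "Off" event: brightness decrease
                  dtype=event_dtype)

on_events = events[events["p"] == 1]
off_events = events[events["p"] == -1]
```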
The Optical Flow Problem
• Scattered events are generated by the motion of a point
• Determine optical flow by estimating the motion of points in the scene using the scattered events
• The image-plane position r = [r_x  r_y]ᵀ of a point evolves with the flow v = [v_x  v_y]ᵀ:
  r_x(t₂) = r_x(t₁) + ∫_{t₁}^{t₂} v_x dt
  r_y(t₂) = r_y(t₁) + ∫_{t₁}^{t₂} v_y dt
Standard Optical Flow Problem
• Coordinates r = [r_x  r_y]ᵀ of some point in the image plane are determined by optical flow
• Assume brightness constancy:
  dI(x, y, t)/dt = 0
• Determine the horizontal and vertical flow (v_x, v_y) from:
  dI(x, y, t)/dt = (∂I/∂x) v_x(x, y, t) + (∂I/∂y) v_y(x, y, t) + ∂I/∂t = 0
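This constraint is one equation in two unknowns (the aperture problem), so on its own it determines only the flow component along the image gradient; a generic sketch of that normal-flow solution:

```python
import numpy as np

def normal_flow(Ix, Iy, It, eps=1e-12):
    """Flow component along the image gradient, from Ix*vx + Iy*vy + It = 0.

    Ix, Iy, It are the partial derivatives of intensity (arrays or scalars).
    """
    g2 = Ix**2 + Iy**2 + eps      # squared gradient magnitude
    return -It * Ix / g2, -It * Iy / g2
```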
Neuromorphic Optical Flow
• Existing neuromorphic optical flow methods rely on optimization [Benosman, '14], [Rueckauer, '16]
• Estimate continuous motion from discrete events
• Introduce a continuous event rate f through convolution of the events with a continuous kernel K:
  f(x, y, t) = K(x, y, t) ∗ E(x, y, t),  where E(x, y, t) = Σ_{i=1}^{N} δ(x − x_i, y − y_i, t − t_i)
• Assume the gradient n = [a  b  c]ᵀ of the event rate is normal to the motion of points in the scene:
  w = [∂t/∂x  ∂t/∂y]ᵀ = [−a/c  −b/c]ᵀ
• Speed of the motion is inversely proportional to the magnitude of the gradient, so the optical flow is written directly in terms of the event rate gradient:
  v = [v_x  v_y]ᵀ = w/‖w‖² = −(c/(a² + b²)) [a  b]ᵀ
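A simplified sketch of this idea on a local batch of events: fit the event-time surface t(x, y) with a plane so its gradient gives w, then take v = w/‖w‖² (synthetic events, not the talk's implementation):

```python
import numpy as np

def event_flow(x, y, t):
    """Estimate (vx, vy) from events (x_i, y_i, t_i) near one moving edge.

    Fit t ~ w_x * x + w_y * y + t0 by least squares; then
    w = [dt/dx, dt/dy]^T and v = w / ||w||^2 (speed inversely
    proportional to the gradient magnitude).
    """
    A = np.column_stack([x, y, np.ones_like(x)])
    coeffs, *_ = np.linalg.lstsq(A, t, rcond=None)
    w = coeffs[:2]
    return w / (w @ w)

# Synthetic events from an edge moving at 100 px/s in +x.
x = np.array([0.0, 1.0, 2.0, 3.0, 4.0])
y = np.array([0.0, 0.5, -0.5, 0.25, -0.25])
t = x / 100.0
print(event_flow(x, y, t))   # ~ [100., 0.]
```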
Neuromorphic Optical Flow Results
Mean processing time per event: 0.7 μs
Neuromorphic Motion Detection
Detect motion relative to the environment using a rotating neuromorphic camera.
[Video stills: camera view and world view of the rotating neuromorphic camera]
Assumptions
• Known camera motion
• Camera motion dominated by rotation
• Total derivative of pixel intensity is zero
Neuromorphic Motion Detection
• Pixel intensity can be recovered by integrating the event rate:
  I(x, y, t) = I(x, y, 0) exp(∫₀ᵗ f(x, y, τ) dτ)
• By the previous assumptions, future pixel intensity can be predicted if the image-plane motion field (v_x, v_y) is known
• Motion field due to camera rotation (ω_x, ω_y, ω_z), for focal length f:
  v_x(x, y) = (xy/f) ω_x(t) − (f + x²/f) ω_y(t) + y ω_z(t)
  v_y(x, y) = (f + y²/f) ω_x(t) − (xy/f) ω_y(t) − x ω_z(t)
• Pixel intensity after a short time Δt is predicted from the motion field:
  Î(x, y, t) = I(x − v_x Δt, y − v_y Δt, t − Δt)
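A sketch of the prediction step under these assumptions, with a pinhole camera of focal length f and image coordinates measured from the principal point; sign conventions may differ from the talk's, and the warp uses scipy interpolation:

```python
import numpy as np
from scipy.ndimage import map_coordinates

def rotation_motion_field(x, y, omega, f):
    """Image-plane velocity (vx, vy) induced by camera rotation
    omega = (wx, wy, wz); x, y are measured from the principal point."""
    wx, wy, wz = omega
    vx = (x * y / f) * wx - (f + x**2 / f) * wy + y * wz
    vy = (f + y**2 / f) * wx - (x * y / f) * wy - x * wz
    return vx, vy

def predict_intensity(I_prev, vx, vy, dt):
    """I_hat(x, y, t) = I(x - vx*dt, y - vy*dt, t - dt): warp the previous
    intensity image backwards along the motion field."""
    H, W = I_prev.shape
    yy, xx = np.mgrid[0:H, 0:W].astype(float)
    return map_coordinates(I_prev, [yy - vy * dt, xx - vx * dt],
                           order=1, mode="nearest")
```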
Neuromorphic Motion Detection Results
1. Compute the difference between predicted and measured intensity:
  ΔI(x, y, t) = Î(x, y, t) − I(x, y, t)
2. Denoise by convolving with a multivariate Gaussian kernel K_Σ with covariance Σ:
  ΔI′(x, y, t) = K_Σ(x, y) ∗ ΔI(x, y, t)
3. Detect motion by comparing the smoothed intensity difference with a threshold γ:
  m(x, y, t) = 1 if ΔI′(x, y, t) ≥ γ, and 0 otherwise
[Figure panels: ΔI, ΔI′, and the detected motion mask m]
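These three steps map directly onto a few lines of numpy/scipy; σ and γ are illustrative, and an absolute value is used so departures of either sign are flagged:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def detect_motion(I_pred, I_meas, sigma=2.0, gamma=0.1):
    """Steps 1-3: prediction error, Gaussian denoising (K_Sigma with an
    isotropic covariance here), threshold gamma -> binary motion mask m."""
    dI = I_pred - I_meas                    # 1. predicted - measured
    dI_s = gaussian_filter(dI, sigma)       # 2. convolve with K_Sigma
    return np.abs(dI_s) >= gamma            # 3. compare with threshold
```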
Summary
Innovation
• Neuromorphic sensing and control algorithms for intelligent, energy-efficient autonomy
Potential Applications
• Autonomous flight for small-scale UAVs
– Real-time target detection
– Obstacle avoidance
Crazyflie 2.0 (https://www.bitcraze.io/crazyflie-2/), 27 g, 9 cm
[Figure panels: Standard Camera, Neuromorphic Camera, Detected Motion, Direction of Motion]
Neuromorphic Sensing and Control of Autonomous Micro-Aerial Vehicles
Taylor Clawson
June 12, 2018
Related Work
T. S. Clawson, S. Ferrari, S. B. Fuller, R. J. Wood, "Spiking Neural Network (SNN) Control of a Flapping Insect-scale Robot," Proc. of the IEEE Conference on Decision and Control, Las Vegas, NV, pp. 3381-3388, December 2016.
T. S. Clawson, S. B. Fuller, R. J. Wood, S. Ferrari, "A Blade Element Approach to Modeling Aerodynamic Flight of Insect-scale Robots," American Control Conference (ACC), Seattle, WA, May 2017.
T. S. Clawson, T. C. Stewart, C. Eliasmith, S. Ferrari, "An Adaptive Spiking Neural Controller for Flapping Insect-scale Robots," IEEE Symposium Series on Computational Intelligence (SSCI), Honolulu, HI, December 2017.