Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

Transcript of Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

Page 1: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

1

Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

Pramod K. Varshney, Kishan G. Mehrotra, C. Krishna Mohan

Electrical Engineering and Computer Science Dept., Syracuse University, Syracuse, NY 13244

Phone: (315) 443-4013 Email: [email protected]

Page 2: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

2

Outline

• Introduction
• Temporal Update Mechanisms for Decision Making in Probabilistic Networks
• Sensor and Bandwidth Management in Distributed Sensor Networks
• Temporal Fusion in Multi-Sensor Target Tracking Systems
• Uncertainty Computation and Visualization
• Concluding Remarks

Page 3: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

3

Information Acquisition and Fusion Model for Visualization

• Dynamic network connectivity with varying bandwidths

• Heterogeneous mobile agents in terms of resources and capabilities

[Diagram: Mobile Agents 1 through N, each with information processing & fusion and HCI and visualization, connected over a communication network to a Command & Control Center, which also performs information processing & fusion and HCI and visualization.]

Page 4: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

4

Sample Military Scenario

Page 5: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

5

Technical Objectives

• Decentralized inferencing algorithms
• Data/information fusion models and algorithms
• Algorithms for uncertainty computation and integration
• Methods for uncertainty representation and visualization
• Experimentation with real data and testbeds

Page 6: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

6

Main Accomplishments

• Development of information fusion and visualization algorithms that take temporal effects into account
– Decision making in Bayesian networks
– Sequential detection problems
– Target tracking
– Uncertainty visualization of mobile objects

Page 7: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

7

Temporal Effects

• Multiple mobile observers with different reliability characteristics send in reports at different points in time

• Target being observed is itself changing in observable or inferable characteristics

• Information arriving later is expected to be more reliable and relevant than earlier information

Page 8: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

8

Temporal Update Mechanisms for Decision Making with Aging Observations in Probabilistic Networks

Page 9: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

9

Background

• Bayesian causal networks are being used for modeling many important uncertainty-related problems (cf. current work by Decision-Making under Uncertainty MURIs)

• Practical battlefield management tasks involve reasoning with uncertainty that varies over time, e.g., observations lose their predictive power as time elapses, and visual observations are more reliable in daytime (better visibility conditions).

Page 10: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

10

Objectives

• To incorporate time-dependence of observations and evidence in Bayesian inference networks.

• To model a wide range of time-dependent uncertainty computations using few parameters that can be queried or learned based on past data.

• To develop an easily usable tool that visualizes and updates time-dependent uncertainty measures in multisensor hierarchical decision-making environments.

Page 11: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

11

Related Work

• Dean and Kanazawa, 1989: Survivor functions used to represent changing beliefs
– Limited modeling power

• Kjaerulff, 1995 and others: Causal networks with nodes duplicated for different time slices
– Networks become very large and are difficult to compute with
– Darwiche, 2001 proposes algorithms to improve their space and time complexity

• Tawfik and Neufeld, 1996: Markov chain representations used to analyze the degeneration of relevance of information with time
– Difficult to use in practice, especially when computations must also depend on actual time points at which observations are made

Page 12: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

12

Detection/Recognition of an Object

[Diagram: An object is observed by Sensors 1, 2, and 3; the sensor readings are processed by Processors 1 and 2, which report to a Central Decision Maker.]

Page 13: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

13

Information Flow

• Central decision maker generates the global inference while accounting for time delays

Page 14: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

14

Causal Network Model

[Diagram: Causal network with nodes for the presence of a target, the presence at a later time, the readings of Sensors 1, 2, and 3, and the reports from Processors 1 and 2; arrows represent the causal links between nodes.]

Page 15: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

15

Conditional Independence

• Inferences about the probability of B at time tB>tA are made based on the priors and the pairwise conditional probabilities associated with the links in the figure

• P(B:tB | A:tA) = P(B:tB | B:tA) · P(B:tA | A:tA) + P(B:tB | ~B:tA) · P(~B:tA | A:tA)
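A minimal sketch of this update rule (the probability values are illustrative assumptions; this is not the project's Matlab tool):

```python
# Propagate a belief about binary event B from time tA to a later time tB
# using the total-probability rule on the slide. Values are illustrative.

def delayed_belief(p_B_given_A_now,   # P(B:tA | A:tA) from the static network
                   p_B_stays_B,       # P(B:tB | B:tA), temporal self-transition
                   p_B_given_notB):   # P(B:tB | ~B:tA)
    """Return P(B:tB | A:tA) for tB > tA."""
    p_notB_given_A_now = 1.0 - p_B_given_A_now
    return (p_B_stays_B * p_B_given_A_now
            + p_B_given_notB * p_notB_given_A_now)

# A strong detection at tA loses evidential value as the delay grows,
# because the transition probabilities drift toward the prior.
print(delayed_belief(0.9, 0.95, 0.05))   # short delay: stays close to 0.9
print(delayed_belief(0.9, 0.60, 0.40))   # long delay: drifts toward the prior
```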

Page 16: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

16

Temporal Belief Updates

• We have developed temporal belief update algorithms that address:
– Dependence of conditional probabilities on absolute times tA and tB
– Dependence of conditional probabilities on relative time delays (tA – tB)

Page 17: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

17

Relative Time-Decay Model

[Plot: linear and exponential time-decay functions]

• Juxtaposition of f and g models a large variety of practical scenarios
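A small sketch, under assumed parameter values, of how linear or exponential decay weights can pull a conditional probability toward the prior as an observation ages (the function names and rates are illustrative, not the presentation's exact f and g):

```python
import math

# Hypothetical sketch of the relative time-decay idea: a weight in [0, 1],
# linear or exponential in the delay, blends the fresh conditional probability
# with the node's prior. Rate parameters below are assumptions.

def linear_decay(delay, horizon):
    """Weight falls linearly from 1 to 0 over `horizon` time units."""
    return max(0.0, 1.0 - delay / horizon)

def exponential_decay(delay, rate):
    """Weight falls exponentially with the delay."""
    return math.exp(-rate * delay)

def aged_conditional(p_cond_fresh, prior, weight):
    """Blend the fresh conditional probability with the prior."""
    return weight * p_cond_fresh + (1.0 - weight) * prior

for d in (0, 2, 5, 10):
    w = exponential_decay(d, rate=0.3)
    print(d, round(aged_conditional(0.9, 0.5, w), 3))
```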

Page 18: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

18

Two Temporal Update Models

• Lazy (Belief update on demand)

• Non-Lazy (Steady updates)

Page 19: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

19

Lazy Belief Updating

• Computation by B needs to be carried out only when node C requests the latest belief of B, given the most recent observation at A

• The conditional probability associated with an evidential edge does not require temporal updating until an observation is actually made at that node

Page 20: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

20

Non-Lazy Belief Updating

• Time-dependent updates are restricted to edges between non-evidential nodes and are performed on a periodic basis

• The belief at each node decays steadily using a fixed multiplicative decay constant, e.g., P(C:tC+1 | B:tB) = k · P(C:tC | B:tB) + (1 − k) · P(C), for tC > tB
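A minimal sketch of this steady decay, with illustrative values for k and the prior:

```python
# Sketch of the non-lazy (steady) update: at every clock tick the conditional
# probability is pulled toward the prior by a fixed decay constant k.
# The specific values below are illustrative assumptions.

def steady_decay(p_cond, prior, k, steps):
    """Apply P <- k*P + (1-k)*prior repeatedly and return the trajectory."""
    trajectory = [p_cond]
    for _ in range(steps):
        p_cond = k * p_cond + (1.0 - k) * prior
        trajectory.append(p_cond)
    return trajectory

# A belief of 0.9 with prior 0.5 and k = 0.8 decays toward 0.5, matching the
# behaviour described for the single-target example.
print([round(p, 3) for p in steady_decay(0.9, 0.5, 0.8, 10)])
```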

Page 21: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

21

Implementation of Relative Case

• A tool was developed in Matlab, implementing the relative time-delay model with lazy belief updating

• A graphical user interface facilitates updating and viewing of results

Page 22: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

22

Single Target Example

Target

Reading of sensor one

Reading of sensor two

Reading of sensor three

Report from processor one

Report from processor two

Page 23: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

23

Synchronous Reports (Single Target)

• The simulation shows that the probability of the uppermost node decays toward 0.5 (the pre-assigned prior probability)

Page 24: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

24

Decay of Inference Hypothesis Probability (Single Target)

Page 25: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

25

Asynchronous Reports (Single Target)

• At time 0, no information is available from either processor

• At time 1, the first processor reports a positive sighting

• At time 2, the second processor reports a positive sighting

Page 26: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

26

Temporal Updates of Inference Hypothesis Probability: Asynchronous Reports (Single Target)

Page 27: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

27

Asynchronous Reports (Single Target)

• At time 0, both processors report a negative sighting.

• At time 1, the first processor reports a positive sighting

• At time 2, the second processor reports a positive sighting

Page 28: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

28

Temporal Updates of Inference Hypothesis Probability: Asynchronous Reports (Single Target)

Page 29: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

29

Multiple Targets Example

Target 1

Reading of sensor one

Reading of sensor two

Reading of sensor three

Report from processor one

Report from processor two

Target 2

Page 30: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

30

Processors with Different Temporal Decay Parameters

• Thicker lines indicate stronger links (higher conditional probs.)

• Info. from first observer decays imperceptibly.

• Info. from observer 2 decays fast with time

Page 31: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

31

Multiple Targets Case

Page 32: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

32

Future Work (1)

• Position uncertainty modeling using hierarchical spatial grids along with the network models
• Target classification using the network model (non-binary hypothesis nodes)
• Modeling practical large-sized problems using the new tool
• Applying data-driven learning algorithms to determine time-dependence of conditional and prior probabilities, based on data
• Knowledge-elicitation process to develop the right time-dependent uncertainty model
• Improving network visualization and user interface (UCSC)
• Test with mobile visualization testbeds (Ga Tech and USC)

Page 33: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

33

Sensor and Bandwidth Management in Distributed Sensor Networks

Page 34: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

34

Bandwidth and Energy Considerations

• Reduction of communication cost is a key focus of distributed sensor networks
– Bandwidth
– Energy

• Bandwidth constraints necessitate the compression of data collected at local sensors

Page 35: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

35

Key Questions

• What is the relationship between data compression and the resulting system performance?

• If a fixed amount of total bandwidth is available, then what is the optimal allocation of bandwidth (bits) to heterogeneous sensors?

Page 36: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

36

Tradeoff

• Tradeoff between the bandwidth, decision quality (QoS), and time-to-decide
– Fixed sample size (FSS) detection problems
• Bayesian criterion: optimal bandwidth distribution across sensors to achieve minimum probability of error
– Sequential detection problems
• Optimal bandwidth distribution across sensors to achieve minimum time delay of decision making for specified detection performance

Page 37: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

37

Distributed Sequential Detection

n_i denotes the number of bits assigned to sensor i = 1, 2, …, M

[Diagram: Local Sensors #1 through #M observe data x_i1, x_i2, …; each sensor's quantizer Q_i maps them to y_i1, y_i2, …, with y_ik taking values in {0, 1, …, 2^(n_i) - 1}; the quantized data are sent to the Fusion Center, which produces the global decision u_f.]

Page 38: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

38

Quantization and Decision-Making

• Local sensor i uses quantizer Qi to quantize its observations x_ik into m-ary variables y_ik prior to transmission

• The quantized data y_ik are sent to the fusion center, where a sequential data fusion scheme is implemented to reach a global decision
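A small sketch, assuming Gaussian sensor observations and a fixed uniform quantizer range, of the quantization step and the level probabilities P(y_ik = l | Hj) that the fusion scheme needs (the actual system optimizes the quantizer parameters jointly with the bit allocation):

```python
import numpy as np
from scipy.stats import norm

# Illustrative uniform quantizer for sensor i and the induced level probabilities
# P(y_ik = l | Hj). The Gaussian observation model (H0: mean 0, H1: mean 1) and
# the fixed quantizer range are assumptions for this sketch only.

def quantizer_edges(n_bits, lo=-4.0, hi=4.0):
    """Interior thresholds of a uniform 2**n_bits-level quantizer."""
    return np.linspace(lo, hi, 2 ** n_bits + 1)[1:-1]

def quantize(x, edges):
    """Map observation x to a level y in {0, ..., len(edges)}."""
    return int(np.searchsorted(edges, x))

def level_probs(edges, mean, sigma):
    """P(y = l | H) for each level l under a Gaussian observation."""
    cut = np.concatenate(([-np.inf], edges, [np.inf]))
    return np.diff(norm.cdf(cut, loc=mean, scale=sigma))

edges = quantizer_edges(n_bits=2)
print(quantize(0.3, edges))                     # which level a reading falls into
print(level_probs(edges, mean=0.0, sigma=1.0))  # P(y = l | H0)
print(level_probs(edges, mean=1.0, sigma=1.0))  # P(y = l | H1)
```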

Page 39: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

39

Sequential Prob. Ratio Test

• At time t, fusion center performs the SPRT as follows:

LLR(t) = Σ_{k=1}^{t} Σ_{i=1}^{M} Σ_{l=0}^{2^(n_i)-1} y_ik(l) · log(P_1il / P_0il)

– If LLR(t) ≥ log((1 - P_M) / P_F): stop, accept H1
– If LLR(t) ≤ log(P_M / (1 - P_F)): stop, accept H0
– Continue otherwise

where y_ik(l) = 1 if y_ik = l and 0 otherwise, and P_jil = P(y_ik = l | Hj), for i = 1, 2, …, M; l = 0, 1, …, 2^(n_i) - 1; j = 0, 1
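A minimal sketch of the fusion-center SPRT above; the per-sensor level-probability tables and the report stream below are illustrative assumptions:

```python
import math

# Sketch of the fusion-center SPRT on quantized data. P0[i][l] and P1[i][l] are
# the per-sensor level probabilities P(y_ik = l | H0) and P(y_ik = l | H1); the
# small hard-coded tables and report stream are assumptions for this sketch.

def sprt_step(llr, reports, P0, P1):
    """Add one time step of reports (one level per sensor) to the running LLR."""
    for i, l in enumerate(reports):
        llr += math.log(P1[i][l] / P0[i][l])
    return llr

def run_sprt(report_stream, P0, P1, p_f=1e-3, p_m=1e-3):
    upper = math.log((1 - p_m) / p_f)   # accept H1 at or above this
    lower = math.log(p_m / (1 - p_f))   # accept H0 at or below this
    llr = 0.0
    for t, reports in enumerate(report_stream, start=1):
        llr = sprt_step(llr, reports, P0, P1)
        if llr >= upper:
            return "H1", t
        if llr <= lower:
            return "H0", t
    return "continue", len(report_stream)

# Two sensors with 1-bit quantizers (levels 0/1); level probabilities assumed.
P0 = [[0.8, 0.2], [0.7, 0.3]]
P1 = [[0.3, 0.7], [0.4, 0.6]]
stream = [(1, 1), (1, 0), (1, 1), (1, 1), (0, 1), (1, 1), (1, 1), (1, 1)]
print(run_sprt(stream, P0, P1))
```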

Page 40: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

40

Average Sample Number

• Neglecting the excesses over the test thresholds, the average sample number (ASN) when Hj is true is

ASN_j = [ (1 - P_j) · log(P_M / (1 - P_F)) + P_j · log((1 - P_M) / P_F) ] / ( Σ_{i=1}^{M} Σ_{l=0}^{2^(n_i)-1} P_jil · log(P_1il / P_0il) )

where P_j = P_F for j = 0, and P_j = 1 - P_M for j = 1
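A sketch of this ASN approximation; the level-probability tables are illustrative assumptions:

```python
import math

# Wald-style ASN approximation. P0[i][l], P1[i][l] are the quantized-observation
# probabilities P(y_ik = l | Hj) for sensor i; the tables below are assumed.

def asn(P0, P1, p_f, p_m, under_h1=True):
    """Approximate average sample number under H1 (or H0)."""
    upper = math.log((1 - p_m) / p_f)
    lower = math.log(p_m / (1 - p_f))
    p_j = (1 - p_m) if under_h1 else p_f
    numerator = (1 - p_j) * lower + p_j * upper
    # Expected per-step LLR increment, summed over all sensors
    Pj = P1 if under_h1 else P0
    drift = sum(Pj[i][l] * math.log(P1[i][l] / P0[i][l])
                for i in range(len(P0)) for l in range(len(P0[i])))
    return numerator / drift

P0 = [[0.8, 0.2], [0.7, 0.3]]
P1 = [[0.3, 0.7], [0.4, 0.6]]
print(asn(P0, P1, p_f=1e-3, p_m=1e-3, under_h1=True))   # expected steps under H1
print(asn(P0, P1, p_f=1e-3, p_m=1e-3, under_h1=False))  # expected steps under H0
```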

Page 41: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

41

Bandwidth Management

Goals:

• Partition the available bandwidth B optimally into {n1, n2, …, nM}, ni ≥ 0

• Optimally quantize each sensor's observation space

• Optimality criterion: minimization of ASN

Page 42: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

42

Bandwidth Allocation Algorithm

• Optimization algorithm
– Sort the sensors in decreasing order of SNR
– For b = 1 to B: scan the sensors in the above sorted order and assign the bth digit to the sensor that minimizes ASN

• Assignment of incremental bandwidth to more informative sensors results in better performance in terms of ASN

• Because of the concavity of ASN as a function of B, this systematic approach based on marginal analysis generates an optimal bit allocation
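A sketch of this marginal-analysis allocation; the ASN surrogate inside is a stand-in assumed only so the example runs, not the real ASN evaluation:

```python
import math

# Greedy marginal-analysis bit allocation: add one bit at a time to the sensor
# that most reduces the ASN. `toy_asn` is NOT the real ASN computation; it is a
# decreasing surrogate based on SNR, assumed so the sketch is runnable.

def toy_asn(alloc, snrs):
    """Illustrative surrogate: more bits at higher-SNR sensors -> smaller ASN."""
    info = sum(s * (1.0 - 2.0 ** (-b)) for s, b in zip(snrs, alloc))
    return 1.0 / (info + 1e-9)

def allocate_bits(total_bits, snrs, asn_fn=toy_asn):
    order = sorted(range(len(snrs)), key=lambda i: -snrs[i])  # decreasing SNR
    alloc = [0] * len(snrs)
    for _ in range(total_bits):
        best_i, best_asn = None, math.inf
        for i in order:                       # scan sensors in sorted order
            alloc[i] += 1
            a = asn_fn(alloc, snrs)
            if a < best_asn:
                best_i, best_asn = i, a
            alloc[i] -= 1
        alloc[best_i] += 1                    # commit the best incremental bit
    return alloc

snrs = [1.0 / s ** 2 for s in [1, 1.1, 1.2, 1.3, 1.4]]   # assumed sensor SNRs
print(allocate_bits(total_bits=7, snrs=snrs))
```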

Page 43: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

43

Target Detection Example

• A distributed sensor network consists of ten sensors of different capabilities in terms of SNR

• Task: detect if there is a target (H1) or not (H0); the hypotheses are assumed to be equiprobable

• Constraint: Total available bandwidth is limited

• Goal: Make a decision as quickly as possible while still satisfying the specified probabilities of false alarm and missed detections

Page 44: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

44

Bit Allocation for Different Bandwidth Constraints

# of available bits:  S1 S2 S3 S4 S5 S6 S7 S8 S9 S10
 1:   1 0 0 0 0 0 0 0 0 0
 2:   1 1 0 0 0 0 0 0 0 0
 3:   1 1 1 0 0 0 0 0 0 0
 4:   1 1 1 1 0 0 0 0 0 0
 5:   1 1 1 1 1 0 0 0 0 0
 6:   2 1 1 1 1 0 0 0 0 0
 7:   2 1 1 1 1 1 0 0 0 0
 8:   2 1 1 1 1 1 1 0 0 0
 9:   2 2 1 1 1 1 1 0 0 0
10:   2 2 1 1 1 1 1 1 0 0
11:   2 2 1 1 1 1 1 1 1 0
12:   2 2 2 1 1 1 1 1 1 0
13:   2 2 2 1 1 1 1 1 1 1
14:   2 2 2 2 1 1 1 1 1 1
15:   2 2 2 2 2 1 1 1 1 1
16:   2 2 2 2 2 2 1 1 1 1
17:   3 2 2 2 2 2 1 1 1 1
18:   3 2 2 2 2 2 2 1 1 1
19:   3 2 2 2 2 2 2 2 1 1
20:   3 3 2 2 2 2 2 2 1 1
21:   3 3 2 2 2 2 2 2 2 1
22:   3 3 3 2 2 2 2 2 2 1
23:   3 3 3 2 2 2 2 2 2 2
24:   3 3 3 3 2 2 2 2 2 2
25:   3 3 3 3 3 2 2 2 2 2

Page 45: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

45

ASN as a Function of Total Available Bandwidth

Pf = Pm = 10^-5, 10 sensors with sigma = [1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]

Page 46: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

46

Time-dependent Cost Formulation

• SPRT cost:

C(B) = B · ∫_0^ASN c(k) dk

where c(k) is a time-dependent cost per-digit

• Determine B* that minimizes C. Also, find bandwidth distribution along with quantizer parameters

Page 47: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

47

Time-dependent Cost as a Function of Total Bandwidth

Pf = Pm = 10^-5, 10 sensors with sigma = [1, 1.1, 1.2, 1.3, 1.4, 1.5, 1.6, 1.7, 1.8, 1.9]

Page 48: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

48

Future Work (2)

• Improved accuracy
– Renewal theory

• Dynamic environment
– Dynamic bandwidth allocation in distributed sensor networks
– Sensor selection

• Multiple hypotheses: classification and recognition

Page 49: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

49

Temporal Fusion in Multi-Sensor Target Tracking Systems

Page 50: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

50

Key Issues

• How does the estimation uncertainty evolve temporally?

• What are the effects of the asynchronous sensors on tracking system performance?

• Can we benefit by using asynchronous sensors?
• If so, how can we design the asynchronous or temporal staggering pattern to maximize the benefit?

Page 51: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

51

Synchronous vs. Asynchronous Measurement Patterns

For a multi-sensor tracking system, sensors can be either synchronous or asynchronous (temporally staggered)

T: Sampling interval of synchronous sensors

T1: Time difference between sensor 1 and sensor 2 in asynchronous-sensor case

T = T1 + T2
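A small sketch of the two measurement patterns, with illustrative values of T and T1:

```python
# Illustrative timing of the two patterns: synchronous sensors both sample at kT,
# while staggered sensors split the interval as T = T1 + T2 (sensor 2 lags
# sensor 1 by T1). The values of T and T1 are assumptions for this sketch.

T, T1 = 1.0, 0.4   # sensor 1's next sample follows sensor 2 by T2 = T - T1

sync = [(k * T, k * T) for k in range(4)]            # (sensor 1, sensor 2) sample times
staggered = [(k * T, k * T + T1) for k in range(4)]  # sensor 2 offset by T1

print(sync)
print(staggered)
```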

Page 52: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

52

Estimation Error as a Function of Time

The system with temporally staggered sensors is a better choice when the major concern is to keep maximum prediction error or average estimation error low

Page 53: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

53

System Performance Metrics

To capture the system performance over time, we construct a family of metrics. The average error variance (AEV) is defined as

AEV = ∫_{kT}^{(k+1)T} V(t) · w(t) dt

where w(t) is a weighting function which satisfies

∫_{kT}^{(k+1)T} w(t) dt = 1

and V(t) is the estimation error variance at time t
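A numerical sketch of the AEV with a uniform weight w(t) = 1/T; the error-variance profiles below are illustrative assumptions standing in for a tracker whose variance grows between measurements and drops at each update:

```python
import numpy as np

# Numerical sketch of the AEV metric with a uniform weight w(t) = 1/T.
# The sawtooth variance profiles V(t) are assumptions for illustration only.

def aev(V, t, w=None):
    """Approximate AEV = integral of V(t)*w(t) dt over one period."""
    dt = t[1] - t[0]
    if w is None:
        w = np.full_like(t, 1.0 / (dt * len(t)))   # uniform weight, integrates to 1
    return float(np.sum(V * w) * dt)

T = 1.0
t = np.linspace(0.0, T, 200, endpoint=False)

# Synchronous sensors: both measure at t = 0, so variance grows over a full period.
V_sync = 0.2 + 0.8 * t
# Staggered sensors: the second measures at T/2, resetting part of the growth.
V_stag = np.where(t < T / 2, 0.2 + 0.8 * t, 0.2 + 0.8 * (t - T / 2))

print(round(aev(V_sync, t), 3), round(aev(V_stag, t), 3))   # staggering lowers AEV here
```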

Page 54: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

54

Two Special Cases of AEV

For AEV = ∫_{kT}^{(k+1)T} V(t) · w(t) dt:

• At the time of observation:
w(t) = 1 if t = kT, and 0 otherwise

• Averaged over time:
w(t) = 1/T if kT ≤ t ≤ (k+1)T, and 0 otherwise

AEVp: AEV for position estimation
AEVv: AEV for velocity estimation

Page 55: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

55

AEV vs. Staggering Interval Length

Page 56: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

56

Optimal Staggering Pattern

• To get the lowest AEV, we numerically calculate steady state covariance matrices and use optimization techniques.

• We find it is best to uniformly stagger sensors with the same measurement noise variances.

• For sensors with the same measurement noise variances, we analytically prove that the AEVp and AEVv of the system with uniformly staggered sensors always outperform those of the system with synchronous sensors.

Page 57: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

57

AEVP vs. Target Maneuvering Index

The target maneuvering index measures the degree of elusiveness of the target to be tracked.

Page 58: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

58

AEVV vs. Target Maneuvering Index

Page 59: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

59

Staggering Time for Minimum AEVP for Two Heterogeneous Sensors

r : the ratio between the two sensors’ measurement noise variances

Page 60: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

60

Staggering Time for Minimum AEVV for Two Heterogeneous Sensors

Page 61: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

61

Future Work (3)

• Investigate the optimal staggering pattern for systems with more than two sensors with different measurement noise variances.

• Take into account the false alarms and missed detections.

• Study the effect of staggered sensors in multiple-target scenarios.

Page 62: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

62

Uncertainty Computation and Visualization

Page 63: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

63

Particle Movement Model

• Uncertainty in initial position, direction and speed

• Uncertainty modeled by Gaussian distribution

• Joint work with Suresh Lodha of UCSC
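A small sketch of such a particle movement model, with all distribution parameters assumed for illustration:

```python
import numpy as np

# Sketch of the particle movement model: initial position, direction, and speed
# are each Gaussian; propagating a cloud of particles forward shows how the
# positional uncertainty spreads with time. All parameter values are assumptions.

rng = np.random.default_rng(1)
n = 1000

pos = rng.normal([0.0, 0.0], [5.0, 5.0], size=(n, 2))        # initial position (m)
heading = rng.normal(np.deg2rad(45.0), np.deg2rad(10.0), n)  # direction (rad)
speed = rng.normal(10.0, 1.0, n)                              # speed (m/s)

for t in (10.0, 30.0, 60.0):
    future = pos + np.stack([speed * np.cos(heading),
                             speed * np.sin(heading)], axis=1) * t
    print(t, np.round(np.cov(future.T), 1))   # position covariance grows with t
```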

Page 64: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

64

Constrained Target Tracking

Page 65: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

65

Future Work (4)

• Ground target
– Limited speed
– Low maneuverability
– On road or in the open field
– Road junctions
– Varying obscuration conditions (tunnels, hills, etc.)

• Tracking algorithm
– Constrained vs. unconstrained problem
– Particle filter (sequential Monte Carlo method)

• Uncertainty in terms of covariance matrices
• Joint work with Christian Fruh and Avideh Zakhor: using constrained tracking techniques, digital road maps, and aerial photographs to improve the localization of a moving vehicle in a city

Page 66: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

66

Some Technical Outreach Activities

• Collaborative project on information fusion, visualization, and integrated display systems
– Andro Consulting Services, Rome, NY
– AFRL, Information Directorate
– The NYS Center for Advanced Technology (CAT) in Computer Applications and Software Engineering (CASE)

• Technical exchange with the Decision Fusion MURI
– Alan Willsky, MIT
– Sanjeev Kulkarni, Princeton

Page 67: Temporal Uncertainty Computation, Fusion, and Visualization in Multisensor Environments

67

Concluding Remarks

• Highlights of accomplishments
– Decision making with aging observations in probabilistic networks
– Temporal sensor staggering in multi-sensor target tracking

• Plans for next year
– Information fusion for heterogeneous sources in dynamic environments
– Uncertainty computation models and algorithms
– Collaborative research
  • Uncertainty visualization with UCSC
  • Estimation and tracking with UCB
  • Mobile visualization and experimentation with Ga Tech and USC
  • Information fusion with MIT and Princeton