
Use of 3D Imaging for Information Product Development

David W. Messinger, Ph.D.

Digital Imaging and Remote Sensing Laboratory

Chester F. Carlson Center for Imaging Science

Rochester Institute of Technology

Feb. 7, 2008

2

RIT LADAR Research Areas

Laser Radar System Simulation

Assisted Scene Construction

LADAR Data Exploitation

System Performance Trade Studies

System Tasking Trade Studies

Algorithm/Exploitation Testing

LADAR 3D Data Sets

MSI Data Sets

HSI Data Sets

Scene Model

LADAR Feature Extraction

LADAR & MSI/HSI Fusion

LADAR+HSI Target Detection

NURI: Semi-Automated DIRSIG Scene Construction

DIRSIG

3

IMINT versus MASINT

• Traditional Data Viewers
  – “Fusion” of 2D imagery and 3D point data
  – 3D “fly around” and basic geometric measurements

• Feature-Based Visualization
  – Visualize rich data descriptions extracted from 2D imagery and 3D data sets
  – Potential to render under different modalities, at different times of day
  – Ability to perform signature analysis techniques because of the availability of spectral information

Image courtesy Merrick & Company, Copyright 2004

2005 Ford Explorer, red paint (spectral reflectance available)

4

Semi-Automated Process for Scene Generation

Coarse Registration

Initial Tree/Building Segmentation

DIRSIG Scene Description

Terrain Extraction

Tree Reconstruction

Background Feature Maps

Refined Building Reconstruction

Spectra Retrieval

Refined Tree/Building Segmentation

Refined Registration

Spectral Assignment

Coarse Building Analysis

3D Data Sets

MSI Imagery

HSI Imagery

5

Color Visualization of a Small Scene

• Visualization of a 3D scene model that was automatically generated from 3D and 2D data sources.

– scene model can be visualized in other wavelengths, from other angles, at different times of day, with different atmospheres, etc.

– situational awareness, operational planning, etc.

Real Imagery

Quick-Look Color Simulation

6

Information Products Available

• Terrain extraction and object characterization techniques

• Techniques for automated plane extraction and cultural 3D object reconstruction
  – building recognition, segmentation, extraction, and reconstruction

• Approaches to 3D object matching and filtering
  – spin images, generalized ellipsoids, etc.
  – techniques for automated tree finding and size estimation

• Approaches to 3D data to 2D image registration

• Approaches to 3D model to 2D image registration

7

Passive Imagery

3D model derived from LADAR data

Project the 3D model onto the 2D image

3D model overlaid on 2D imagery

3D Model and 2D Image Registration
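The model-to-image registration above amounts to projecting LADAR-derived 3D points through a camera model into the passive image plane. A minimal pinhole-projection sketch; the intrinsic matrix `K`, rotation `R`, and translation `t` here are hypothetical stand-ins for the calibration that a registration procedure would actually estimate:

```python
import numpy as np

def project_points(points_3d, K, R, t):
    """Project Nx3 world points into pixel coordinates with a pinhole model.

    K: 3x3 intrinsics, R: 3x3 rotation, t: 3-vector translation
    (illustrative values -- real registration estimates these from
    tie points between the LADAR model and the image).
    """
    cam = (R @ points_3d.T).T + t          # world -> camera frame
    uvw = (K @ cam.T).T                    # camera -> homogeneous pixels
    return uvw[:, :2] / uvw[:, 2:3]        # perspective divide

# Toy example: identity pose, unit focal length
K = np.eye(3)
R = np.eye(3)
t = np.zeros(3)
pts = np.array([[0.0, 0.0, 2.0], [1.0, 1.0, 2.0]])
print(project_points(pts, K, R, t))
```

With the model projected this way, each 3D facet lands on the 2D pixels it should explain, which is what enables the overlay shown on this slide.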

8

Potential for 3D Object Change Detection

• There are objects in the image that are not in the model

• Were they missed by the sensor that created the model or “added”?

• Potential exists for object change detection based on shape detection methods

– Spin Images, described later

9

Potential Applications (Not Yet Developed)

• Trafficability and Lines of Communication (LOC)
  – Potential to semi-automatically detect roads (with occlusions), paths, pipes, etc.
  – Estimate density of wooded areas and trafficability by vehicles

• 3D based change detection and component dissection

• Line of sight analyses

• Path forward to tie 3D models to process models?
  – Both natural processes and man-made processes

• Improved MSI and HSI atmospheric compensation
  – 3D feature extraction can improve relative solar angle estimation

Fusion of LADAR and HSI for Improved Target Detection

Michael Foster, Ph.D. (USAF)

John Schott, Ph.D.

David Messinger, Ph.D.

11

Physics-Based Target Detection Algorithms for HSI

• Approach leverages knowledge of the physics of the observable quantities to improve target detection under difficult observation / target state conditions
  – targets under varying illumination
  – targets with variable “contrast”
  – targets with modified surface properties

• General methodology
  – develop a physics-based model to predict the manifestations of the target observable signature
  – include known sources of variability
  – detect for a family of signatures, called a “target space”

• Applied to detection in reflective and emissive spectral regimes
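The general methodology above, forward-modeling a family of predicted signatures and detecting against that “target space”, can be sketched as follows. This uses a deliberately simplified linear gain/offset stand-in for the atmosphere (the real approach drives a radiative transfer model such as MODTRAN); the reflectance `rho`, the parameter ranges, and the nearest-sample score are illustrative assumptions only:

```python
import numpy as np

def build_target_space(rho_target, a_range, b_range, n=50):
    """Forward-model a family of at-sensor target signatures.

    Simplified linear atmosphere L = a*rho + b; the gains/offsets
    (a, b) sweep plausible illumination/atmospheric states.
    """
    rng = np.random.default_rng(0)
    a = rng.uniform(*a_range, n)[:, None]
    b = rng.uniform(*b_range, n)[:, None]
    return a * rho_target[None, :] + b      # n x bands "target space"

def detect(pixel, target_space):
    """Score = negative distance to the nearest target-space sample."""
    d = np.linalg.norm(target_space - pixel[None, :], axis=1)
    return -d.min()

rho = np.array([0.1, 0.4, 0.3])             # hypothetical target reflectance
space = build_target_space(rho, (0.5, 1.5), (0.0, 0.1))
on_target = detect(1.0 * rho + 0.05, space)
off_target = detect(np.array([0.9, 0.1, 0.8]), space)
print(on_target > off_target)
```

The key idea carries over directly: detection is against the whole family of plausible manifestations, not a single fixed signature.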

12

“Traditional” Target Detection

Scene Image

radiance space

target detection

target property

probability map

atmospheric compensation / TES

target space

target domain | image domain

13

Physics-Based Signatures Detection

Scene Image

radiance space

target properties

physics-based model

target detection

probability map

target manifestations

radiance space

target domain | image domain

14

Physics-Based Detection of Surface Targets in Reflective HSI

• Physics Based Structured InFeasibility Target-detector (PB-SIFT)
  – Work of Emmett Ientilucci under an IC Postdoctoral fellowship
  – Physics Based Orthogonal Subspace Projection (PBosp)
  – Structured Infeasibility Projector (SIP)

• Overview
  – Variability in target signature is due to atmospheric contributions and target illumination
  – Captures variability in target space using endmembers
  – Can isolate pixels that have a significant projection but are not target

15

Addition of LADAR Information

• Physics-based forward modeling techniques for target detection typically use radiometric variability to describe the target manifestations possible

• Generally ignore or over-model geometric terms in the forward model

• IF WE HAD
  – co-temporal, co-registered LADAR & HSI
  – oversampled LADAR

• CAN WE
  – use these data to constrain the geometric terms in the forward model and improve target detection?

16

Sub-pixel Target Radiometric Model

• Predicts spectral radiance at a sensor based on mixture of target and background spectra for a specific atmosphere and geometry

• Inherent geometric terms

• Shadowing term – K

• Incident illumination angle

• Downwelled shape factor – F

• Pixel purity – M
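A minimal sketch of such a sub-pixel mixing model, assuming a simple target term (K-weighted direct solar plus F-weighted downwelled illumination on a Lambertian surface) blended with background radiance by the pixel purity M. Atmospheric transmission and path radiance are omitted here, so the numbers are purely illustrative:

```python
import numpy as np

def at_sensor_radiance(rho_t, L_bg, E_sun, E_down, K, theta, F, M):
    """Simplified sub-pixel radiance: mix of target and background.

    K: direct-sun shadow factor (0 = fully shadowed, 1 = fully lit)
    theta: incident solar angle on the target facet [rad]
    F: downwelled (skydome) shape factor in [0, 1]
    M: target fill fraction of the pixel in [0, 1]
    The full model in the talk includes atmospheric terms as well.
    """
    L_target = (K * E_sun * np.cos(theta) + F * E_down) * rho_t / np.pi
    return M * L_target + (1.0 - M) * L_bg

# Fully lit, sun-facing, open-sky target filling half the pixel
L = at_sensor_radiance(rho_t=0.4, L_bg=1.0, E_sun=10.0, E_down=2.0,
                       K=1.0, theta=0.0, F=1.0, M=0.5)
print(round(L, 4))
```

Because K, theta, F, and M all enter the prediction, pinning any of them down from LADAR directly shrinks the target space that detection must search.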

17

Physics-Based Signatures Detection

Scene Image

radiance space

target properties

physics-based model

target detection

probability map

target manifestations

radiance space

target domain | image domain

includes geometric information to constrain the model parameter space

18

LADAR 3D Point Cloud Processing

• Shadow estimate – K
  – Shadow feeler

• Incident illumination angle
  – Extract points associated with the LADAR ground plane
  – Estimate point normals using eigenvector analysis
  – Calculate angle between point normals and solar direction

• Downwelling shape factor – F
  – Assume clear sky and use LADAR skydome feeler technique

• Pixel purity – M
  – Spin-image techniques to identify probable LADAR target points
  – Subject to high false alarms

• Project point data into HSI FPA to create pixel maps
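The normal-estimation and illumination-angle steps above can be sketched directly: eigen-analysis of the local covariance gives the per-point normal (the smallest-eigenvalue eigenvector), and the illumination angle is the angle between that normal and the solar direction. The brute-force neighbor search and flat test patch are illustrative simplifications:

```python
import numpy as np

def point_normals(points, k=8):
    """Per-point normals from the smallest-eigenvalue eigenvector of
    the local neighborhood covariance (the eigen-analysis step)."""
    normals = np.zeros_like(points)
    for i, p in enumerate(points):
        d = np.linalg.norm(points - p, axis=1)      # brute-force k-NN
        nbrs = points[np.argsort(d)[:k]]
        cov = np.cov((nbrs - nbrs.mean(axis=0)).T)
        w, v = np.linalg.eigh(cov)                  # ascending eigenvalues
        nrm = v[:, 0]                               # smallest-eigenvalue vector
        normals[i] = nrm if nrm[2] >= 0 else -nrm   # orient upward
    return normals

def illumination_angle(normals, sun_dir):
    """Angle [deg] between each normal and the unit solar direction."""
    sun = sun_dir / np.linalg.norm(sun_dir)
    cosang = np.clip(normals @ sun, -1.0, 1.0)
    return np.degrees(np.arccos(cosang))

# Flat horizontal patch: normals ~ +z, sun 30 degrees from zenith
rng = np.random.default_rng(1)
pts = np.column_stack([rng.uniform(0, 1, 50), rng.uniform(0, 1, 50),
                       np.zeros(50)])
n = point_normals(pts)
ang = illumination_angle(n, np.array([np.sin(np.radians(30)), 0.0,
                                      np.cos(np.radians(30))]))
print(np.allclose(ang, 30.0, atol=1e-6))
```

For a flat ground plane the recovered angle matches the solar zenith angle, which is the consistency check the terrain illumination map relies on.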

19

Microscene Spectral Data - DIRSIG Simulation

1. Gray Humvee

2. Calibration panel

3. Gray SUV

4. Gray shed

5. Red sedan

6. Red SUV

7. Gray SUV under tree

8. Inclined gray SUV

9. Inclined gray Humvee

high spatial resolution for visualization only

20

Microscene Spectral Data - DIRSIG Simulation

• Spectral cube has 1.0 m GSD

• Spatially oversampled producing mixed pixels

• 0.4 – 1.2 μm

RGB of cube used in processing

21

Microscene Spatial Data - DIRSIG Simulation

3D LADAR POINT CLOUD NADIR VIEW

3D LADAR POINT CLOUD OBLIQUE VIEW

Post spacing of approximately 40 cm, with and without quantization and pointing error

22

Feature Maps - Estimate of K

SHADOW MAP

Note that shadows “line up” with trees in RGB image

23

24

Feature Maps - Estimate of Illumination Angle

ILLUMINATION ANGLE MAP

Illumination angle map for terrain (after tree removal)

25

Feature Maps - Estimate of F

SHAPE FACTOR MAP

Note full sky view on tops of trees and near zero sky visibility for ground surrounded by trees

26

Target Detection in 3D Point Cloud: Spin-Images

• 2D parametric space image

• Capture 3D shape information about a single point in 3D point cloud

• Pose invariant
  – Based on local geometry relative to a single point normal
  – i.e., invariant to tip, tilt, pan

• Scale variant
  – Estimate scale from ground plane/sensor position

• Graceful detection degradation in the presence of occlusion

27

Spin-Image Formation: Surface Point Coordinate Transformation

• 2D parameter space coordinates
  – Radial distance to the local point normal
  – Signed vertical distance along the basis normal
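In the standard spin-image formulation, each neighboring point maps to the two coordinates above (commonly written α for the radial distance and β for the signed distance along the basis normal) and is accumulated into a 2D histogram. A minimal sketch, with bin size and image size chosen arbitrarily for illustration:

```python
import numpy as np

def spin_image(points, p, n, bin_size=0.1, size=8):
    """Accumulate a spin image about basis point p with unit normal n.

    alpha = radial distance from the line through p along n
    beta  = signed distance along n
    """
    d = points - p
    beta = d @ n
    alpha = np.sqrt(np.maximum(np.sum(d * d, axis=1) - beta**2, 0.0))
    img = np.zeros((size, size))
    i = ((size / 2) - beta / bin_size).astype(int)   # rows: beta (down)
    j = (alpha / bin_size).astype(int)               # cols: alpha
    for r, c in zip(i, j):
        if 0 <= r < size and 0 <= c < size:
            img[r, c] += 1                           # one count per point
    return img

# One point at alpha = 0.25, beta = 0.15 relative to the origin
pts = np.array([[0.25, 0.0, 0.15]])
img = spin_image(pts, np.zeros(3), np.array([0.0, 0.0, 1.0]))
print(img.sum())
```

Because α and β depend only on distances relative to the basis normal, the image is unchanged by tip, tilt, or pan of the object, which is the pose invariance noted on the previous slide.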

28

Spin-Image Examples

• 3 spin image pairs corresponding to 3 different points on the model

• Left image is high resolution spin image (small bin size)

• Right image is low resolution spin image (larger bin size) after bilinear interpolation

29

Spin Image Geometric Target Detection

3D target model

for all surface points, construct spin image

3D image data

for all data points, construct spin image

identify points in the image data that have high correspondence to a library model

30

Library Matching Issues

• Spin image library generated from 3D model
  – Points on all sides of model
  – High sampling density
  – No occlusion

• Spin images generated from the scene
  – Target and background present
  – Points only from LADAR illumination direction
  – Self-occlusion and background occlusion
  – Not necessarily at same spatial sampling as model library

31

Spin Image Library Matching

• Intelligent model library generation

• Typically model has many more points than scene data
  – Normalize scene and model spin images

• Scene has points from only one direction
  – Spin angle limits model points that can contribute to a spin image when building the model library
  – Compute normal for every point in model
  – Pick spin image basis point p
  – Allow only normals within 90° angle relative to spin image basis normal to contribute to model spin image
  – This builds self-occlusion effects into model library

32

Feature Maps - Estimate of M

PIXEL PURITY MAP

Results from spin-image detection of geometric target model

33

34

Creating the Target Space

35

Multi-Modal Target Detection Methodology & Advantages

• Only those pixels on the focal plane that are likely to contain a target, as determined by the 3D geometric target detection algorithm, are interrogated on the HSI focal plane

– potential dramatic reduction in FAs based on the geometry information

• Spectral “background” information is derived from the pixels most likely to not contain the target (again, based on LADAR data)

• Per pixel, the physics-based target space is “customized” for the specific geometric conditions in that pixel

• “Fusion” occurs in the following sense:
  – the geometric information, derived from the LADAR data, influences how we exploit the HSI

36

Target Detection Results

• All features with gray paint have high scores

• Even the hidden SUV

• Detection statistic only calculated for those pixels with M > 0.3

• Threshold at 0.2 eliminates all false alarms

note the missing calibration panel with the actual target reflectance

37

38

(Partial) Application to Real LADAR Data

• Leica LADAR collection of RIT campus

• Coverage of the simulated area

• Truck parked on the “berm” in the scene

• No co-temporal hyperspectral imagery

• Point cloud processing schemes applied to real data

39

Real Point Cloud Processing Results

shadow map

shape factor map

illumination angle map

pixel purity map

40

Summary

• Demonstration of the feasibility of improving HSI target detection through the use of LADAR information products

• LADAR was used to derive / estimate:
  – shadowing effects

– downwelling illumination factor

– target likelihood based on geometric target model

– sub-pixel mixing fraction

– direct illumination angle

on a per-pixel basis in the HSI focal plane

• Estimation of other information products possible with existing tools designed to enhance scene building capabilities

Questions?

David W. Messinger, Ph.D.

messinger@cis.rit.edu

(585) 475-4538

Back Up Charts