
3D Imaging with ToF Camera

Time-of-Flight Principle

Reflected IR shows a phase delay proportional to the distance from the camera.

Time-of-flight of light → distance: it is not simple to measure the flight time directly at each pixel of any existing image sensor.

Phase Delay Measurement

Q1 through Q4 are the amounts of electrons measured at the corresponding times.

In real situations, it is difficult to sense the electric charge at an exact time instance.

Phase Delay Measurement

Distance:

$$ d(t_d) = \frac{c\,t_d}{2} = \frac{c}{4\pi f}\,\arctan\!\left(\frac{Q_3 - Q_4}{Q_1 - Q_2}\right) $$

where $t_d$ is the delay of the reflected IR and $f$ is the modulation frequency. Writing $Q_i = a\,q_i$ with a common amplitude $a$:

$$ d(t_d) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{a\,q_3 - a\,q_4}{a\,q_1 - a\,q_2}\right) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{q_3 - q_4}{q_1 - q_2}\right) $$

Assumption: a single reflected IR signal.

In principle, the amplitude of the reflected IR does not affect the depth calculation.
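To make the formulas above concrete, here is a minimal sketch (Python/NumPy; the function name, the 20 MHz modulation frequency, and the sample values are illustrative assumptions, not from the slides):

```python
import numpy as np

C = 299_792_458.0  # speed of light (m/s)

def tof_depth(q1, q2, q3, q4, f_mod=20e6):
    """Four-phase ToF depth: phase delay -> distance.

    q1..q4: charge samples at 0/180/90/270 degrees of the modulation.
    f_mod: IR modulation frequency (20 MHz is a typical value).
    """
    phase = np.arctan2(q3 - q4, q1 - q2)    # phase delay of the reflection
    phase = np.mod(phase, 2 * np.pi)        # wrap into [0, 2*pi)
    return C * phase / (4 * np.pi * f_mod)  # c*t_d/2, with t_d = phase/(2*pi*f)

# Amplitude cancels: scaling all four samples by the same factor
# leaves the arctan ratio, and hence the depth, unchanged.
q = np.array([120.0, 80.0, 110.0, 90.0])
assert np.isclose(tof_depth(*q), tof_depth(*(2.5 * q)))
```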

Multiple IR Signals

- Large Sensor Pixel

- Scattering

- Multipath

- Motion Blur

- Transparent Object

In real situations, multiple reflected IR signals with different phase

delays & amplitudes can be superposed.

$$ d(t_d) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{\bigl(q_3^{(1)} + q_3^{(2)}\bigr) - \bigl(q_4^{(1)} + q_4^{(2)}\bigr)}{\bigl(q_1^{(1)} + q_1^{(2)}\bigr) - \bigl(q_2^{(1)} + q_2^{(2)}\bigr)}\right) $$

where $q_i^{(1)}$ and $q_i^{(2)}$ are the charges contributed by the superposed IR signals #1 and #2.

We do not know how many IR signals will be superposed.
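A small numerical sketch illustrates the problem: the charges of the superposed returns add per phase bin, and the arctan of the sums yields a single, biased depth (the sinusoidal signal model, helper names, and values are illustrative assumptions):

```python
import numpy as np

C, F = 299_792_458.0, 20e6  # speed of light, assumed 20 MHz modulation

def samples(d, amp):
    """Ideal four-phase samples for one return at depth d with amplitude amp."""
    phi = 4 * np.pi * F * d / C  # round-trip phase delay
    return amp * np.array([np.cos(phi), -np.cos(phi), np.sin(phi), -np.sin(phi)])

def depth(s):
    """Depth from (possibly summed) samples via the arctan formula."""
    return C * np.mod(np.arctan2(s[2] - s[3], s[0] - s[1]), 2 * np.pi) / (4 * np.pi * F)

# Two superposed returns (1.0 m strong, 3.0 m weaker): charges simply add...
s = samples(1.0, amp=100) + samples(3.0, amp=40)
print(depth(s))  # ~1.47 m: neither surface, a phase-weighted blend
```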

Large Sensor Pixel

In order to increase sensitivity:

- large pixel size or pixel binning

A single enlarged (or binned) pixel can then collect IR signal #1 and IR signal #2 from different surfaces at once.

$$ d(t_d) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{\bigl(q_3^{(1)} + q_3^{(2)}\bigr) - \bigl(q_4^{(1)} + q_4^{(2)}\bigr)}{\bigl(q_1^{(1)} + q_1^{(2)}\bigr) - \bigl(q_2^{(1)} + q_2^{(2)}\bigr)}\right) $$

Light Scattering

Multiple light reflections between the lens and the sensor cause light scattering [1].

[1] "Real-time scattering compensation for time-of-flight camera", CVS07

Multipath Errors

[Figure: IR LED and sensor; multipath interference causes depth errors in concave objects]

$$ d(t_d) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{\bigl(q_3^{(1)} + q_3^{(2)}\bigr) - \bigl(q_4^{(1)} + q_4^{(2)}\bigr)}{\bigl(q_1^{(1)} + q_1^{(2)}\bigr) - \bigl(q_2^{(1)} + q_2^{(2)}\bigr)}\right) $$

Motion Blur

A moving camera or object within a single integration time leads to wrong depth calculations.

[Figure: image sensor observing moving objects]

Motion Blur

The characteristics of ToF motion blur are different from those of color-image motion blur.

[Figure: overshoot blur vs. undershoot blur]

Motion Blur

We use a set of modulation cycles for the depth calculation.

In the motion-blur case, multiple IR signals arrive sequentially.

[Figure: timing diagram of the emitted IR and sequentially arriving reflected IR #1 & #2, with charges Q1~Q4 accumulated over the integration time]

$$ d(t_d) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{\bigl(n\,q_3 + (1-n)\,\hat q_3\bigr) - \bigl(n\,q_4 + (1-n)\,\hat q_4\bigr)}{\bigl(n\,q_1 + (1-n)\,\hat q_1\bigr) - \bigl(n\,q_2 + (1-n)\,\hat q_2\bigr)}\right) $$

where $n$ is the fraction of the integration time that observed the first signal ($q_i$) and $1-n$ the fraction that observed the second ($\hat q_i$).
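As a quick numerical sketch of this blending (same illustrative signal model as before; the value of n and the two depths are made-up):

```python
import numpy as np

C, F = 299_792_458.0, 20e6

def samples(d, amp):
    phi = 4 * np.pi * F * d / C
    return amp * np.array([np.cos(phi), -np.cos(phi), np.sin(phi), -np.sin(phi)])

def depth(s):
    return C * np.mod(np.arctan2(s[2] - s[3], s[0] - s[1]), 2 * np.pi) / (4 * np.pi * F)

# A fraction n of the integration observes surface #1 (1 m), the rest
# observes surface #2 (2 m); the blurred depth is the arctan of the blend.
for n in (1.0, 0.75, 0.5, 0.25, 0.0):
    d_blur = depth(n * samples(1.0, 100) + (1 - n) * samples(2.0, 100))
    print(f"n = {n:.2f} -> blurred depth = {d_blur:.3f} m")
# The blend sweeps non-linearly between the two depths; for other
# phase/amplitude combinations it can overshoot or undershoot them.
```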

ToF Deblurring

[Figure: deblurring pipeline: input depth → blur detection → blur level → deblurred depth]

- There are some relations between Q1~Q4 (see the sketch after this list):

$$ Q_1 + Q_2 = Q_3 + Q_4, \qquad Q_1 + Q_2 + Q_3 + Q_4 = K $$

- We assume the 2-layer blur case: a single flat foreground + a single flat background.
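One way these relations can drive blur detection (a hedged sketch: it assumes the four phase samples are captured sequentially within the integration, as on the slide, so motion breaks the equality; values and threshold are illustrative):

```python
def blur_level(Q1, Q2, Q3, Q4, eps=1e-9):
    """Normalized violation of Q1 + Q2 == Q3 + Q4.

    For a static pixel both pair-sums equal the same total charge
    (Q1 + Q2 + Q3 + Q4 = K); sequentially captured phase samples that
    mix different surfaces break the equality, flagging motion blur."""
    return abs((Q1 + Q2) - (Q3 + Q4)) / (Q1 + Q2 + Q3 + Q4 + eps)

static  = (45.0, 5.0, 40.0, 10.0)   # 45+5 == 40+10 -> level 0
blurred = (45.0, 5.0, 28.0, 10.0)   # pair sums disagree -> level > 0
print(blur_level(*static), blur_level(*blurred))

THRESHOLD = 0.05                    # illustrative, tuned per sensor
print("blurred" if blur_level(*blurred) > THRESHOLD else "static")
```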

Transparent Object

A 2-layer approximation of a transparent object:

$$ d(t_d) = \frac{c}{4\pi f}\,\arctan\!\left(\frac{\bigl(n\,q_3 + (1-n)\,\hat q_3\bigr) - \bigl(n\,q_4 + (1-n)\,\hat q_4\bigr)}{\bigl(n\,q_1 + (1-n)\,\hat q_1\bigr) - \bigl(n\,q_2 + (1-n)\,\hat q_2\bigr)}\right) $$

where $q_i$ comes from the foreground (transparent) surface and $\hat q_i$ from the background behind it.

- Sometimes 2 layers are not enough.

- Multiple reflections occur between objects (when they are close).

- In most cases, transparent objects have specular surfaces.

Transparent Object

$$ d(t_d) \propto \arctan\!\left(\frac{Q_3 - Q_4}{Q_1 - Q_2}\right) \quad\longrightarrow\quad d(t_d) \propto \arctan\!\left(\frac{(Q_3 + \hat Q_3) - (Q_4 + \hat Q_4)}{(Q_1 + \hat Q_1) - (Q_2 + \hat Q_2)}\right) $$

where $\hat Q_i$ are the charges added by the second (background) return.

[Figure: depth and IR-intensity images of a transparent object]

Now, the amplitude matters.
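A short sketch of why the amplitude now matters (same illustrative two-return model as earlier; the depths and amplitudes are made-up):

```python
import numpy as np

C, F = 299_792_458.0, 20e6

def samples(d, amp):
    phi = 4 * np.pi * F * d / C
    return amp * np.array([np.cos(phi), -np.cos(phi), np.sin(phi), -np.sin(phi)])

def depth(s):
    return C * np.mod(np.arctan2(s[2] - s[3], s[0] - s[1]), 2 * np.pi) / (4 * np.pi * F)

# Front (transparent) surface at 1 m plus background at 2 m: with a
# single return the amplitude cancelled, but here the recovered depth
# shifts with the front surface's relative amplitude.
for a_front in (10, 50, 100, 200):
    d = depth(samples(1.0, a_front) + samples(2.0, 100))
    print(f"front amplitude {a_front:3d} -> depth {d:.3f} m")
```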

Transparent Object

$$ d(t_d) \propto \arctan\!\left(\frac{(Q_3 + \hat Q_3) - (Q_4 + \hat Q_4)}{(Q_1 + \hat Q_1) - (Q_2 + \hat Q_2)}\right) $$

Integration time-related Error

Due to the variation in the number of collected electrons during the integration time, the repeatability of each depth point varies.

[Figure: depth repeatability at integration times of 30 ms and 80 ms]

Amplitude-related Errors

Due to the non-uniform IR illumination and the reflectivity variation of objects, a polynomial fitting model is used.

Amplitude image of a planar object with a ramp; parts of the ramp are selected for calibration (blue rectangle).

The depth error samples (blue) and the model (green) fitted to the error.

[Plot: depth error (m), on the order of 0 to 0.004 m, as a function of amplitude (0 to 1) and pixel position x, y]
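A minimal sketch of the polynomial-fitting idea (NumPy; the synthetic error curve, polynomial degree, and sample count are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic calibration samples: depth error as a function of measured
# amplitude (in practice, taken from the ramp region of a planar target).
amplitude = np.linspace(0.1, 1.0, 50)
depth_error = 0.004 * (1.0 - amplitude) ** 2 + rng.normal(0, 2e-4, 50)

# Fit a low-order polynomial error model ...
coeffs = np.polyfit(amplitude, depth_error, deg=3)

# ... and correct new measurements by subtracting the predicted error.
def correct_depth(depth_m, amp):
    return depth_m - np.polyval(coeffs, amp)

print(correct_depth(1.500, amp=0.3))  # low amplitude -> larger correction
```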

Amplitude Correction

Light attenuates according to the inverse-square law.

Distance-based intensity correction [18]
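A sketch of distance-based intensity correction under the inverse-square law (the 1 m reference distance and function name are illustrative choices):

```python
def correct_amplitude(amp, depth_m, ref_depth_m=1.0):
    """Normalize a measured IR amplitude to a reference distance:
    received power falls off as 1/d^2, so scale by (d / d_ref)^2."""
    return amp * (depth_m / ref_depth_m) ** 2

# A surface at 2 m returns ~1/4 of the amplitude it would show at the
# 1 m reference distance; the correction undoes that attenuation.
print(correct_amplitude(25.0, depth_m=2.0))  # -> 100.0
```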

Kinect Principle (1/3)

Basically, it is based on the structured-light principle.

[Figure: projected IR speckle pattern]

Kinect Principle (2/3)

0. Calibrate the source and detector.

1. A known IR pattern is projected from the source.

2. The detector identifies each dot (or set of dots).

3. Triangulate to calculate depth (see the sketch below).
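For a rectified projector-camera pair, step 3 reduces to depth from the dot's disparity, Z = f·b/d (a sketch; the focal length and baseline are illustrative values, not Kinect's actual calibration):

```python
def depth_from_disparity(disparity_px, focal_px=580.0, baseline_m=0.075):
    """Triangulation for a rectified pair: Z = f * b / d.

    disparity_px: horizontal shift of an identified speckle dot between
                  the known reference pattern and the observed IR image.
    """
    if disparity_px <= 0:
        raise ValueError("dot unmatched or at infinity")
    return focal_px * baseline_m / disparity_px

print(depth_from_disparity(29.0))  # ~1.5 m
```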

Kinect Principle (3/3)

- Random speckles identify the (x, y) location.

- The orientation and shape of the speckles change with distance, which identifies the z location.

ToF vs Kinect

Kinect Fusion (SAIT & KAIST) using a ToF camera

3D reconstruction using multiple depth images

Depth/Point Cloud Processing

- 3D Features

- 3D Filtering

- Registration

- Surface Processing

Depth Distortion Upon Materials

• Conventional approaches assume Lambertian materials.

• Various surface materials exhibit complex light interactions, causing non-linear distortions in light transport.

• Depth cameras suffer from depth distortion that depends on material properties.

• The type of distortion varies with the sensing principle of the depth camera.

Depth Cameras

• We provide a distortion analysis based on two sensor types: a time-of-flight sensor and a structured-light sensor.

[Swissranger] [Kinect]

Depth Distortion – Lambertian

• Material affects the sensing performance (Lambertian).

All existing 3D sensing techniques are limited to Lambertian objects.

[Figure: ToF depth camera (IR LED + sensor) vs. structured-light depth camera (projector + sensor)]

Depth Distortion – Specularity

• Non-Lambertian materials cause failures in sensing the reflected signal (specularity).

[Figure: ToF depth camera (IR LED + sensor) vs. structured-light depth camera (projector + sensor)]

Depth Distortion – Translucency

• Non-Lambertian materials cause failures in sensing the reflected signal (translucency).

[Figure: ToF depth camera (IR LED + sensor) vs. structured-light depth camera (projector + sensor)]

Depth Distortion – Global Illumination

• Complex illumination affects the sensing performance (global illumination).

[Figure: ToF depth camera (IR LED + sensor) vs. structured-light depth camera (projector + sensor)]

Depth Error – Specularity

• ToF depth camera


Depth Error – Translucency

• ToF depth camera


Depth Error – Specularity

• Structured light depth camera


Depth Error – Translucency

• Structured light depth camera


Depth Error Analysis

• Data collection & analysis

Depth Error Analysis – ToF Sensor

Depth Error Analysis – Kinect Sensor

Color-Depth Calibration

Given a calibrated ToF-stereo system:

• Each ToF point P_T defines a correspondence between P_L and P_R.

Correspondences (samples) are obtained by using the calibration parameters:

• each correspondence comes from a ToF point

• different color -> different depth (see the projection sketch below)
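A hedged sketch of how one such correspondence arises (illustrative pinhole calibration; the intrinsics, poses, and point are made-up, not the rig's actual parameters):

```python
import numpy as np

def project(P, K, R, t):
    """Pinhole projection of a 3D point into one camera."""
    p = K @ (R @ P + t)
    return p[:2] / p[2]

# Illustrative stereo calibration: identical intrinsics, left camera at
# the origin, right camera 10 cm to its right (t = -R @ C).
K = np.array([[600.0, 0.0, 320.0], [0.0, 600.0, 240.0], [0.0, 0.0, 1.0]])
R = np.eye(3)
t_L, t_R = np.zeros(3), np.array([-0.10, 0.0, 0.0])

P_T = np.array([0.2, 0.1, 1.5])  # 3D point measured by the ToF camera
P_L = project(P_T, K, R, t_L)    # its pixel in the left image
P_R = project(P_T, K, R, t_R)    # its pixel in the right image
print(P_L, P_R)                  # one stereo correspondence sample
```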

ToF-to-Left Mapping

• We use the left image as the reference.

ToF-to-Left Mapping

Resolution mismatch

ToF-to-Left Mapping

Left-to-ToF occlusions: the depth decreases from left to right.

ToF-to-Left Mapping

ToF-to-left occlusions: the depth increases from left to right.

Point Cloud Filtering

• We reject points in left-to-ToF occluded areas.

• We keep the minimum-depth point in case of overlap (due to ToF-to-left occlusions), as sketched below.
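A minimal sketch of the overlap rule (z-buffer style; the function name, image size, and points are illustrative):

```python
import numpy as np

def splat_min_depth(pixels, depths, shape=(480, 640)):
    """Map ToF points into the left view, keeping the minimum depth
    wherever several points land on the same pixel (ToF-to-left occlusion)."""
    zbuf = np.full(shape, np.inf)
    for (u, v), z in zip(pixels, depths):
        if 0 <= v < shape[0] and 0 <= u < shape[1]:
            zbuf[v, u] = min(zbuf[v, u], z)  # the nearer surface wins
    return zbuf

# Two ToF points projecting onto the same left pixel: the foreground
# point (1.2 m) occludes the background one (2.8 m).
zbuf = splat_min_depth([(10, 5), (10, 5)], [2.8, 1.2])
print(zbuf[5, 10])  # 1.2
```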

Disparity Map: Initialization

• Run Delaunay triangulation on the low-resolution point cloud…

• …and initialize the stereo disparity map (see the sketch below).
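A sketch of this initialization with SciPy (LinearNDInterpolator triangulates the input points with Delaunay/Qhull internally; the sample points and disparities are made-up):

```python
import numpy as np
from scipy.interpolate import LinearNDInterpolator

# Sparse ToF points mapped into the left view: pixel coords + disparity.
pts = np.array([[10.0, 10.0], [600.0, 20.0], [320.0, 400.0], [50.0, 450.0]])
disp = np.array([40.0, 12.0, 25.0, 35.0])

# Triangulate the sparse samples and interpolate linearly per triangle.
interp = LinearNDInterpolator(pts, disp)

u, v = np.meshgrid(np.arange(640.0), np.arange(480.0))
init_disparity = interp(u, v)  # NaN outside the triangulated region
print(np.nanmean(init_disparity))
```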