
A Physical Light Transport Model for Non-Line-of-Sight Imaging Applications

SYED AZER REZA,1 MARCO LA MANNA,1 AND ANDREAS VELTEN1,2,*

1 Department of Biostatistics and Medical Informatics, University of Wisconsin-Madison, 1300 University Avenue, Madison, WI, 53706, USA
2 Department of Electrical Engineering, University of Wisconsin-Madison, Madison, WI, 53706, USA
* [email protected]

Abstract: Non-line-of-sight (NLOS) imaging has recently attracted a lot of interest from the scientific community. The goal of this paper is to provide the basis for a comprehensive mathematical framework for NLOS imaging that is directly derived from physical concepts. We introduce the irradiance phasor field (P-field) as an abstract quantity for irradiance fluctuations, akin to the complex envelope of the electric field (E-field) that is used to describe the propagation of electromagnetic energy. We demonstrate that the P-field propagator is analogous to the Huygens-Fresnel propagator that describes the propagation of other waves, and show that NLOS light transport can be described with the processing methods that are available for LOS imaging. We perform simulations to demonstrate the accuracy and validity of the P-field formulation and provide experimental results to demonstrate a Huygens-like P-field summation behavior.

© 2020 Optical Society of America

1. Introduction

1.1. Background

In an optical line-of-sight (LOS) imaging scenario, such as the one depicted in Fig. 1a, the goal is to reconstruct an image of a target that is in the direct path of a laser source and a camera. Conversely, in an optical non-line-of-sight (NLOS) imaging scenario, the goal is to reconstruct an image of a target that is not in the direct path of the transmitter and receiver. Considering the scene shown in Fig. 1b, it has been shown [1–6] that 3D NLOS image recovery can be achieved by illuminating a relay surface in the visible scene (i.e., a relay wall) and collecting light reflected from the object via the relay surface. In Fig. 1b, a light pulse generated by the laser L, incident on a relay wall at point p, subsequently scatters from the wall in all directions, with a fraction of the photons reflected from the relay wall reaching the target. A fraction of the photons reflected from the target travels back to the wall. The ultra-fast camera, focused at location q on the relay wall, measures the photon flux from the target reflected at q as a function of time.

Reconstructing a 3D image of the hidden object is an inverse light transport problem. Different light transport models describe the propagation of light through a scene (for example, in [7]) and enable us to infer properties of the scene by analyzing the data captured by a camera [8]. Previous approaches have used ray optics and attempted to model light propagation through a scene as a linear operator that can be inverted with a variety of inverse methods [1, 2, 9–17]. Non-linear inverse methods for more complex scenes have also been proposed [5, 18, 19], but the added level of complexity makes their application challenging. Questions regarding null-spaces, attainable resolution and contrast, and dealing with multiple reflections in the hidden scene have also been discussed in prior art; for example, the role of the BRDF and the effects of null-spaces are discussed in [19] and [20] respectively. Model complexity and inaccurate modeling of real light transport pose a great challenge for conducting more fundamental analyses of NLOS imaging.


Fig. 1. A LOS scenario, shown in (a), is characterized by a target (a sphere) that is in the direct path of both the transmitter (laser) and the receiver (camera). In an NLOS scenario, shown in (b), the target is hidden by an occluder from both the transmitter and the receiver.

In some of the recent works [21, 22], a wave propagation-based P-field approach for NLOS imaging has produced excellent image reconstruction results of hidden scenes. Experimental results in [21] demonstrate the benefits of treating an NLOS imaging system as a virtual camera-based P-field imaging system. The goal of this paper is to describe ToF NLOS light transport using a wave propagation model akin to those governing other imaging methods. Here, we describe the fundamental mathematical foundations of P-field summation for imaging applications, the limitations of the P-field NLOS imaging approach, and the requisite assumptions and approximations for the validity of the P-field model.

The newly introduced P-field denotes the complex envelope of the average optical irradiance. In this paper we show that propagation of P-fields from the virtual aperture at the relay wall to a virtual sensor behind the wall can be modeled, analogously to the Huygens' integral, as the propagation of wave-like P-field wavelet contributions from the aperture plane to the detector plane. With the proposed phasor field (P-field) formalism (a phasor representation of radiance was discussed in [23], where the authors propose a framework to analyze light transport in correlation-based ToF ranging), we show that NLOS imaging can be treated similarly to LOS imaging.

With the aid of simulation results, we also demonstrate the effect of aperture roughness (here, roughness refers to the material roughness of the relay wall, which we treat as the P-field virtual aperture) on the accuracy of the amplitude and phase estimates in the detector plane obtained through the P-field integral, which involves a summation of P-field contributions from the aperture. We also present preliminary experimental results where we implemented a P-field interferometer and measured the change in P-field signal amplitude by changing the path length of one interferometer arm while keeping the path length of the other, reference arm fixed.

2. The P-field Imaging Approach

2.1. The P-field Integral

To explain what we mean by a Huygens'-like integral describing the propagation of P-fields, let us consider Fig. 2, which describes the propagation of various E-field spherical wavelet contributions from an aperture plane A to a detection plane Σ separated by an arbitrary distance z. The Green's function-based solution of the wave equation describes the propagation of scalar E-field wavelet contributions from each location (x′, y′, 0) ∈ A to any particular location (x, y, z) ∈ Σ, with the resulting scalar E-field E(x, y, z) at (x, y, z) described as a linear sum of these E-field wavelet contributions.


Fig. 2. Analogy between E-field and P-field propagation: scalar E-field contributions E(x′, y′, 0) (top) and P-field contributions (bottom) from all generic locations (x′, y′, 0) on an aperture plane A to a location (x, y, z) on an observation plane Σ, where A and Σ are separated by a distance z and any pair of locations (x′, y′, 0) ∈ A and (x, y, z) ∈ Σ is separated by a distance |r|.

The Huygens' integral is given by [24, 25]

$$E(x, y, z) = K_E \iint_{\mathcal{A}} \tau(x', y', 0)\, E_0(x', y', 0)\, \frac{e^{jK|r|}}{|r|}\, \chi\, \mathrm{d}x'\,\mathrm{d}y'. \tag{1}$$

In (1), KE = 1/jλE is the E-field proportionality coefficient, τ(x′, y′, 0) = t0(x′, y′, 0)e^{j∆φK(x′, y′, 0)} and E0(x′, y′, 0) are the E-field transmission function and the amplitude of the E-field contribution from location (x′, y′, 0) ∈ A respectively, K = 2π/λE is the E-field wavenumber, λE is the E-field wavelength, |r| = √((x − x′)² + (y − y′)² + z²) is the Euclidean distance between a pair of locations (x′, y′, 0) ∈ A and (x, y, z) ∈ Σ, and χ is the obliquity factor; hereon we assume that χ = 1. Moreover, in (1),

$$\mathcal{E} = \tau(x', y', 0)\, E_0(x', y', 0)\, \frac{e^{jK|r|}}{|r|} \tag{2}$$

is defined as an E-field spherical wavelet contribution from (x′, y′, 0) ∈ A.

In our case, the optical irradiance is modulated by a time-harmonic function P(t) (which we refer to as the P-field signal) with frequency Ω and a corresponding wavelength λP. Let us assume that the following condition on the coherence length lc of light holds true:

$$l_c \ll \lambda_P. \tag{3}$$

Moreover, let

$$\frac{2\pi}{\omega} \ll T \ll \frac{2\pi}{\Omega}, \tag{4}$$

i.e., the detector integration time T is much longer than the time period of the short optical carrier (E-field) of frequency ω but much shorter than the longer time period of the P-field signal of frequency Ω. We show in the Appendix that we can describe P-field propagation and summation in Σ as the sum of Huygens'-like P-field wavelet contributions from A. Namely, similarly to (1) for E-fields, the temporal evolution of the time-averaged irradiance can be described by the P-field integral as a sum of P-field wavelet contributions P(r) from A as


$$|I_{Tot-F}(x, y, z)| = K_P \iint_{\mathcal{A}} \frac{1}{|r|}\, \underbrace{P_{0,\Omega}(x', y', 0)\, \frac{e^{j\beta|r|}}{|r|}}_{\mathcal{P}(r)}\, \chi\, \mathrm{d}x'\,\mathrm{d}y', \tag{5}$$

where P(r) represents the phasor form of a signed, real, time-harmonic, monochromatic P-field contribution P(r, t), stated as P(r, t) = Re[P(r)e^{jΩt}]. In (5), |ITot−F(x, y, z)| is the sum of the signed envelopes of optical irradiance at (x, y, z), described as the sum of P-field wavelet contributions from all locations (x′, y′, 0) ∈ A with respective amplitudes P0,Ω(x′, y′, 0); KP is the P-field proportionality coefficient; and β is the P-field wavenumber, expressed either in terms of the P-field wavelength λP or its frequency Ω (where n denotes the refractive index of the medium between A and Σ, and c denotes the speed of light in vacuum) as

$$\beta = \frac{2\pi}{\lambda_P} = \frac{n\Omega}{c}. \tag{6}$$

In other words, P(r, t) describes the irradiance at a location (x, y) as a function of time t and distance |r|, with the static background irradiance component subtracted, allowing P(r, t) to be a signed quantity. Just like the E-field, the P-field P(x′, y′, z′, t) at any location (x′, y′, z′) can be expressed as a superposition of monochromatic phasor components:

$$P(x', y', z', t) = \int_{-\infty}^{\infty} P_{\Omega}(x', y', z', t)\, \mathrm{d}\Omega, \tag{7}$$

where PΩ(x′, y′, z′, t) = P0,Ω(x′, y′, z′)e^{−jΩt} is a monochromatic P-field wave and P0,Ω(x′, y′, z′) = ∫ P(x′, y′, z′, t)e^{jΩt} dt is the Fourier-domain representation of P(x′, y′, z′, t).

Analogous to the complex E-field, PΩ(x′, y′, z′, t) is symmetric about the origin Ω = 0, which ensures that P(x′, y′, z′, t) is always real. In practice we usually omit the negative-frequency component and compute only the complex phasor, keeping in mind that the actual phasor-field intensity is given by the sum of the positive- and negative-frequency components (i.e., the phasor and its complex conjugate). Note also that, unlike the electric field, the intensity cannot be negative, which only affects the phasor at PΩ=0. This static component of the field can usually be safely ignored. We can also subtract it and consider only the fast variations: P′Ω(x′, y′, z′, t) = PΩ(x′, y′, z′, t) − PΩ=0(x′, y′, z′, t). In the following we consider a single monochromatic component PΩ(x′, y′, z′, t), for which this distinction is of no consequence.

To improve readability and maintain clarity throughout the manuscript, a complete derivation of the P-field integral is presented in the Appendix, where we provide details of all requisite approximations and assumptions about aperture roughness, coherence length and detector integration time, which allow us to arrive at the result in (5). Notations used for describing quantities in the E-field Huygens' integral and the Huygens'-like P-field integral are summarized in Table 1. Moreover, to avoid confusion, all symbols used throughout the manuscript are summarized in Table 2.

2.2. Correcting Amplitude Error of P-field Integral

Note that the spherical P-field wavelet contributions in (5) are not the only term in the argument of the P-field integral, as an additional 1/|r| term multiplies each corresponding P-field wavelet contribution. For cases where either the far-field (Fraunhofer) or the near-field (Fresnel) approximation in (A.35) and (A.36) holds (for a chosen P-field wavelength λP), we can assume that 1/|r| ≈ 1/z. For these cases, (5) can be expressed as

$$|I_{Tot-F}(x, y, z)| = \frac{K_P L}{z} \int_{\mathcal{A}} P_{0,\Omega}(x', y', 0)\, \frac{e^{j\beta|r|}}{|r|}\, \chi\, \mathrm{d}x'\,\mathrm{d}y'. \tag{8}$$


Table 1. Comparison of E-field and P-field notations.

Quantity               | Electric field                       | Phasor field
Wavelet                | E(r) = E0(x′, y′, 0) e^{jK|r|} / |r| | P(r) = P0,Ω(x′, y′, 0) e^{jβ|r|} / |r|, with PΩ(x′, y′, t) = P0,Ω(x′, y′) e^{−jΩt} and P0,Ω(x′, y′) = ∫ P(x′, y′, t) e^{jΩt} dt
Peak amplitude         | E0                                   | P0,Ω
Proportionality coeff. | KE                                   | KP
Wavelength (m)         | λE                                   | λP
Frequency (Hz)         | ω                                    | Ω
Wavenumber (m⁻¹)       | K = 2π/λE = ω/c                      | β = 2π/λP = Ω/c

Table 2. Summary of coefficients and symbols.

Symbol          | Description
γ (m)           | Maximum aperture roughness
L               | Irradiance loss/scattering coefficient
C(x, y, z, |r|) | Ultra near-field correction factor
⟨η⟩             | Estimation error for γ ≈ 0
ζ               | Medium impedance

In the case of the ultra near-field imaging scenario, where neither of the conditions in (A.35) and (A.36) is satisfied, 1/|r| ≉ 1/z. In this case, a simple summation of P-field wavelets would lead to an amplitude estimation error (note that there is no phase estimation error even for the ultra near-field case, as no approximations are made to the phase terms). For the ultra near-field case, the introduction of an amplitude correction factor C(x, y, z, |r|) allows us to correct for the aforementioned amplitude estimation error and enables us to accurately express (8) as a sum of P-field wavelets:

$$|I_{Tot-F}(x, y, z)| = \frac{K_P L\, C(x, y, z, |r|)}{|r(x, y, z)|_{Av}} \int_{\mathcal{A}} P_{0,\Omega}(x', y', 0)\, \frac{e^{j\beta|r|}}{|r|}\, \chi\, \mathrm{d}x'\,\mathrm{d}y'. \tag{9}$$

In (9), the distance

$$|r(x, y, z)|_{Av} = \sqrt{z^2 + (x - \langle x'\rangle)^2 + (y - \langle y'\rangle)^2} \tag{10}$$

is the distance of any given location (x, y, z) ∈ Σ from an average location (⟨x′⟩, ⟨y′⟩, 0) in A. In short, this location-dependent correction factor allows us to equate

$$\int_{\mathcal{A}} \frac{1}{|r|}\, P_{0,\Omega}(x', y', 0)\, \frac{e^{j\beta|r|}}{|r|}\, \chi\, \mathrm{d}x'\,\mathrm{d}y' = \frac{C(x, y, z, |r|)}{|r(x, y, z)|_{Av}} \int_{\mathcal{A}} P_{0,\Omega}(x', y', 0)\, \frac{e^{j\beta|r|}}{|r|}\, \chi\, \mathrm{d}x'\,\mathrm{d}y', \tag{11}$$

which enables expressing the P-field integral as a pure summation of P-field spherical contributions even for the ultra near-field scenario. This formulation allows us to describe P-field propagation from an initial aperture plane to a final imaging plane analogously to how the Huygens' E-field integral completely describes E-field propagation.
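Since (11) defines C(x, y, z, |r|) implicitly as a ratio of the two integrals, it can be evaluated numerically for any observation point. Below is a minimal sketch of our own under the same placeholder discretization as above, with P0,Ω = 1:

```python
import numpy as np

# Sketch: evaluate the amplitude correction factor C(x, y, z, |r|) of Eq. (11)
# as the ratio of the two discretized integrals, scaled by |r(x, y, z)|_Av
# from Eq. (10). Geometry and wavelength are placeholders; P_{0,Omega} = 1.
beta = 2 * np.pi / 0.30
xp = np.linspace(-4.0, 4.0, 200)
yp = np.linspace(-2.0, 2.0, 100)
XP, YP = np.meshgrid(xp, yp)
dA = (xp[1] - xp[0]) * (yp[1] - yp[0])

x, y, z = 0.5, 0.0, 2.5                       # an ultra near-field observation point
r = np.sqrt((x - XP) ** 2 + (y - YP) ** 2 + z ** 2)
wavelets = np.exp(1j * beta * r) / r          # P-field wavelet contributions P(r)
lhs = np.sum(wavelets / r) * dA               # left-hand integral of Eq. (11)
rhs = np.sum(wavelets) * dA                   # right-hand integral of Eq. (11)
r_av = np.sqrt(z**2 + (x - XP.mean())**2 + (y - YP.mean())**2)   # Eq. (10)
C = r_av * lhs / rhs                          # complex ratio; |C| is the correction
print(abs(C))
```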

It should also be noted that we defined the P-field as a single-frequency (monochromatic) function, but only to simplify our mathematical treatment. The P-field imaging integral is correct and applicable even if P-fields represent more complex irradiance fluctuations. Through a spectral decomposition such as a Fourier-series representation, any P-field signal with a multi-frequency spectral composition can simply be expressed as a linear summation of single-frequency contributions.

We can also calculate the approximate mean percentage error ⟨η⟩ (for m × n total (x, y) locations in Σ) between the true, amplitude-correct estimate of |ITot(x, y, z)| from (5) and the corresponding uncorrected P-field estimate from (8) (assuming no aperture roughness) for any separation distance z between A and Σ. ⟨η⟩ is given by

$$\langle\eta\rangle = \frac{1}{m}\frac{1}{n}\sum_{u=1}^{m}\sum_{v=1}^{n} \frac{\left|\,\left|\iint_{\mathcal{A}} \frac{1}{|r_{u,v}|}\, P_{0,\Omega}\, \frac{e^{j\beta|r_{u,v}|}}{|r_{u,v}|}\, \mathrm{d}x'\,\mathrm{d}y'\right| - \left|\frac{1}{z}\iint_{\mathcal{A}} P_{0,\Omega}\, \frac{e^{j\beta|r_{u,v}|}}{|r_{u,v}|}\, \mathrm{d}x'\,\mathrm{d}y'\right|\,\right|}{\left|\iint_{\mathcal{A}} \frac{1}{|r_{u,v}|}\, P_{0,\Omega}\, \frac{e^{j\beta|r_{u,v}|}}{|r_{u,v}|}\, \mathrm{d}x'\,\mathrm{d}y'\right|} \times 100, \tag{12}$$

where the subscripts u and v denote the u-th x-location and the v-th y-location in Σ, and

$$|r_{u,v}| = \sqrt{z^2 + (x_u - x')^2 + (y_v - y')^2}. \tag{13}$$

The definition of ⟨η⟩ can be modified to replace the 1/z term in (12) by 1/|r(x, y, z)|_Av, where |r(x, y, z)|_Av was introduced in (10) and denotes the distance of any generic location (x, y, z) ∈ Σ from an average location (⟨x′⟩, ⟨y′⟩, 0) ∈ A. If the average P-field amplitude estimation error is defined this way, ⟨η⟩ = ⟨η_AV⟩, expressed as

$$\langle\eta_{AV}\rangle = \frac{1}{m}\frac{1}{n}\sum_{u=1}^{m}\sum_{v=1}^{n} \frac{\left|\,\left|\iint_{\mathcal{A}} \frac{1}{|r_{u,v}|}\, P_{0,\Omega}\, \frac{e^{j\beta|r_{u,v}|}}{|r_{u,v}|}\, \mathrm{d}x'\,\mathrm{d}y'\right| - \left|\frac{1}{|r(x, y, z)|_{Av}}\iint_{\mathcal{A}} P_{0,\Omega}\, \frac{e^{j\beta|r_{u,v}|}}{|r_{u,v}|}\, \mathrm{d}x'\,\mathrm{d}y'\right|\,\right|}{\left|\iint_{\mathcal{A}} \frac{1}{|r_{u,v}|}\, P_{0,\Omega}\, \frac{e^{j\beta|r_{u,v}|}}{|r_{u,v}|}\, \mathrm{d}x'\,\mathrm{d}y'\right|} \times 100. \tag{14}$$
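The double sums in (12) and (14) translate directly into a discrete computation. The sketch below (our illustration, with placeholder geometry and P0,Ω = 1) evaluates both error definitions over a small grid of observation points:

```python
import numpy as np

# Sketch: mean percentage amplitude errors <eta> (Eq. 12) and <eta_AV> (Eq. 14)
# over an m x n grid of (x, y) locations in Sigma. Placeholder geometry and
# discretization; P_{0,Omega} = 1 over the aperture.
beta = 2 * np.pi / 0.30
xp = np.linspace(-4.0, 4.0, 160)
yp = np.linspace(-2.0, 2.0, 80)
XP, YP = np.meshgrid(xp, yp)
dA = (xp[1] - xp[0]) * (yp[1] - yp[0])
xa, ya = XP.mean(), YP.mean()                 # average aperture location <x'>, <y'>

def mean_error(z, xs, ys, use_r_av=False):
    errs = []
    for xu in xs:
        for yv in ys:
            r = np.sqrt((xu - XP) ** 2 + (yv - YP) ** 2 + z ** 2)
            wav = np.exp(1j * beta * r) / r
            exact = abs(np.sum(wav / r) * dA)            # Eq. (5)-style sum
            scale = np.sqrt(z**2 + (xu - xa)**2 + (yv - ya)**2) if use_r_av else z
            approx = abs(np.sum(wav) * dA) / scale       # Eq. (8)-style sum
            errs.append(abs(exact - approx) / exact * 100.0)
    return np.mean(errs)

xs, ys = np.linspace(-1.0, 1.0, 5), np.linspace(-0.5, 0.5, 3)
print(mean_error(2.5, xs, ys), mean_error(2.5, xs, ys, use_r_av=True))
```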

3. Phasor Field Simulations

In this section, we present simulation results of the expected P-field sums at all locations in Σ. The surface roughness is assumed to be uniformly distributed between 0 and γ. The computed P-field sum for a monochromatic optical carrier modulated by a single-tone RF signal is normalized to the maximum value of |ITot−F(x, y, z)| (denoted by the subscript 'Norm'), which assumes a maximum surface roughness γ ≈ 0. As the P-field integral in (A.39) assumes negligible surface roughness, the accuracy of the P-field estimate is affected for apertures with larger roughness, especially if the roughness is comparable to the P-field wavelength.

The first two simulations (namely Simulation 1 and Simulation 2), performed for the far-field and near-field imaging scenarios, solely investigate this effect of increasing aperture roughness on the P-field distribution estimate. In the third and final simulation (Simulation 3), we only investigate the effect of the amplitude estimation error for the ultra near-field scenario. In this simulation, we assume negligible aperture roughness (i.e., γ ≈ 0), such that the roughness is enough to scatter light but not enough to cause a significant phase shift to the P-field (which is the underlying assumption for the P-field integral).
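The following sketch outlines how such a roughness comparison can be set up. It is a schematic of our own with placeholder parameters; the paper's exact grids and roughness realizations are not reproduced here. A single realization of uniformly distributed roughness heights h adds the P-field phase ∆φβ = (2π/λP)h to each aperture element, per (A.34), which is then compared to the smooth-aperture estimate of (A.39):

```python
import numpy as np

# Sketch of the roughness comparison in Simulations 1 and 2: the smooth-aperture
# estimate of Eq. (A.39) versus Eq. (A.34) with one random roughness realization.
# The P-field roughness phase is Delta_phi_beta = (2*pi/lambda_P) * h with
# h ~ Uniform(0, gamma). All parameters are illustrative placeholders.
rng = np.random.default_rng(0)
lam_P = 0.30
beta = 2 * np.pi / lam_P
xp = np.linspace(-4.0, 4.0, 200)
yp = np.linspace(-2.0, 2.0, 100)
XP, YP = np.meshgrid(xp, yp)
dA = (xp[1] - xp[0]) * (yp[1] - yp[0])
gamma = 0.09                                        # max roughness, e.g. 9 cm
dphi = beta * rng.uniform(0.0, gamma, XP.shape)     # P-field roughness phases

def estimates(x, y, z):
    r = np.sqrt((x - XP) ** 2 + (y - YP) ** 2 + z ** 2)
    wav = np.exp(1j * beta * r) / r**2              # (1/|r|) P(r), P_{0,Omega} = 1
    smooth = abs(np.sum(wav) * dA)                  # Eq. (A.39), gamma ~ 0
    rough = abs(np.sum(wav * np.exp(1j * dphi)) * dA)   # Eq. (A.34)
    return smooth, rough

print(estimates(0.0, 0.0, 5.0))
```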


Fig. 3. Far-field results for Simulation 1: (a) P-field estimate |ITot−F(x, y, z)|_Norm at z = 200 m from the P-field integral of (A.39), and (b) estimate |ITot(x, y, z)|_Norm from (A.34) with a roughness of γ = 21 cm taken into account.

Fig. 4. P-field distribution estimate |ITot−F(x, y, z)|_Norm in Σ from (A.39) and its deviation from the actual P-field distribution estimates |ITot(x, y, z)|_Norm from (A.34) at (x, 0, 200 m) with increasing aperture roughness. Plots shown are for surface roughness values of γ = 3 cm, γ = 6 cm, γ = 9 cm, and γ = 15 cm for a P-field wavelength of λP = 30 cm.

For the first two simulation scenarios, we show that estimates of P-field distributions calculated through the P-field integral in (A.39) remain accurate even for high aperture roughness values and gradually degrade as γ → λP. For Simulation 3, we compare the P-field distribution estimate obtained from the P-field integral in (A.40) without amplitude correction to that obtained from the P-field integral in (A.34) with the amplitude correction factor applied. We also show that the P-field estimation error in the absence of significant aperture roughness for the ultra near-field case is only an amplitude error and not a phase estimation error. For each of the three simulations, we set the E-field and P-field wavelengths to λE = 1 µm and λP ≈ 30 cm (corresponding to Ω = 1 GHz). Also, for each simulation, we assume a uniformly illuminated aperture with arbitrarily chosen dimensions of 8 m × 4 m.

3.1. Simulation 1: Far-Field Imaging Scenario

For this simulation, we chose z = 200 m as the mutual separation between Σ and A. The P-field distribution |ITot−F(x, y, z)|_Norm estimated from the P-field integral in (A.39) is plotted in Fig. 3a. This P-field estimate assumes negligible roughness and hence does not depend on any changes to the aperture roughness.


We compare this P-field estimate to the actual P-field distribution |ITot(x, y, z)|_Norm calculated from (A.34), which takes aperture roughness into account, for one instance of a random aperture roughness profile. This P-field distribution, calculated from (A.34) for γ = 21 cm, is plotted in Fig. 3b.

Comparing the P-field integral estimate to the estimate where aperture roughness is accounted for, we observe that the P-field integral of (A.39) provides an accurate estimate of the P-field distribution in Σ even for high aperture roughness values (such as our case, where γ is of comparable dimension to λP). For the chosen maximum roughness values of γ, the P-field integral provides a reasonably accurate estimate of the P-field distribution despite an aperture roughness value as high as 21 cm (70% of λP). Of course, the P-field integral provides a more accurate estimate for low roughness values, and this estimate gradually degrades as γ → λP.

To clearly show how the accuracy of the P-field distribution estimate provided by the P-field integral is affected by increasing roughness, we plot in Fig. 4 |ITot(x, y, z)|_Norm and |ITot−F(x, y, z)|_Norm only along the x-axis in Σ for y = 0. For four maximum roughness values of γ = 3 cm, γ = 6 cm, γ = 9 cm and γ = 15 cm, Fig. 4 shows that the P-field estimate in Fig. 3a deviates from the actual P-field distribution when aperture roughness features become large enough to be comparable to λP. In other words, additional P-field speckle noise, which increases with increasing aperture roughness, is not taken into account in (A.39), which affects the Huygens-like P-field distribution estimate in Σ, especially at large γ values. This results in an increasing P-field estimation error with increasing aperture roughness.

3.2. Simulation 2: Near-Field Imaging Scenario

In Simulation 2, we repeat Simulation 1 for a near-field scenario, with the separation of the observation plane Σ set to z = 5 m. The dimensions of the aperture as well as the values of λE and λP are the same as in Simulation 1.

The x-y plane P-field distribution for this scenario, computed through the P-field integral in (A.39), is compared to the P-field distributions calculated from (A.34) for aperture roughness values of γ = 3 cm and γ = 9 cm respectively. These P-field distributions are plotted in Fig. 5. As is the case with the far-field estimates in Simulation 1, the estimate from the P-field integral in (A.39) deteriorates with increasing roughness: because the P-field integral assumes negligible roughness, it does not account for the P-field speckle noise that grows with increasing aperture roughness.


Fig. 5. Near-field results of Simulation 2: (a) P-field distribution estimate |ITot−F(x, y, z)|_Norm at z = 5 m from the P-field integral in (A.39), and P-field estimates |ITot(x, y, z)|_Norm from (A.34) taking into account aperture roughness of (b) γ = 3 cm and (c) γ = 9 cm.


Fig. 6. Amplitude estimation error in estimates from the P-field integral with and without amplitude correction for (a) z = 2.5 m, (b) z = 5 m, and (c) z = 200 m.


Fig. 7. A color map of (a) log⟨η⟩ and (b) log⟨η_AV⟩ for 1 m ≤ z ≤ 50 m in the x-z plane.

3.3. Simulation 3: P-field Amplitude Estimation Error for the Ultra Near-Field Imaging Scenario

While the previous two simulations demonstrate a degradation in the P-field distribution estimate from the P-field integral with increasing aperture roughness, the purpose of Simulation 3 is to demonstrate the amplitude error incurred by the P-field integral in the ultra near-field scenario even for negligible aperture roughness, as discussed in Section 2.2. It is in this scenario that the introduction of a P-field amplitude correction factor is necessary. Through these simulations, we also show that this particular error, i.e., a dynamic scaling error with respect to the location in Σ, is independent of the phase estimation error investigated in Simulation 1 and Simulation 2.

For this simulation, we first assume that the separation between A and Σ is small, such that neither the Fraunhofer condition in (A.35) nor the Fresnel condition in (A.36) applies. Next, for comparison, we also compute the amplitude error for the near-field and far-field cases of Simulations 1 and 2. To observe only the magnitude of the error in amplitude estimation, Simulation 3 considers a uniformly illuminated aperture with negligible roughness, i.e., γ ≈ 0. For the ultra near-field case we set the plane separation to z = 2.5 m, while the near-field and far-field distances were again set to 5 m and 200 m respectively. For all cases, the uncorrected P-field distribution was estimated from (8) and compared to the amplitude-corrected estimate from (5). These amplitude-corrected and uncorrected P-field distributions in Σ along the x-axis at y = 0 are plotted in Fig. 6.


Fig. 8. Analogy between (a) a classical Michelson interferometer with a movable mirror and (b) the P-field interferometer with a movable mirror pair.

From the plots in Fig. 6, we clearly observe that, even for an ultra near-field scenario, a P-field distribution estimate from (A.39) without the correction factor only yields an amplitude estimation error but no phase estimation error, as no phase approximations were made throughout the derivation of the P-field integral. This is evident from the locations of the minima and maxima in the P-field distributions in Fig. 6.

We also observe that the P-field amplitude estimation error increases as the separation between A and Σ is decreased, and that it is negligible for large separation distances, i.e., the Fresnel and Fraunhofer cases. This is also depicted in Fig. 7, where we plot a color map of the logarithmic P-field estimation errors ⟨η⟩ and ⟨η_AV⟩ in the x-z plane from (12) and (14) respectively, for 1 m ≤ z ≤ 50 m.

4. Experimental Demonstration

We performed an experiment to verify our claim of P-field summation behavior in the presence of rough apertures, such as in NLOS imaging scenarios. This, as mentioned earlier, is analogous to the Huygens' summation of E-field wavelets that describes conventional LOS imaging. To experimentally demonstrate P-field summation, we implemented the setup shown in Fig. 9, which enacts the simplest situation: the summation of two optically incoherent P-field contributions from a rough aperture. This experiment is analogous to an interferometry experiment in the realm of E-fields. The analogy is depicted in Fig. 8: akin to translating a mirror and changing the path length in one arm of a Michelson interferometer, we emulate the shifting of the light source with the help of a translatable mirror pair, where each new location of the mirror pair introduces a unique additional phase to one beam component, resulting in a different P-field sum at a fixed detector location.

A Gaussian beam from a fiber-coupled laser source exits the optical fiber cable via a Fiber Collimator (FC). A 50:50 Beam Splitter (BS) splits the propagating beam into two identical beams, each with exactly half of the original beam power. We refer to these beams as Beam 1 and Beam 2. Beam 1 and Beam 2 propagate through path lengths L1 and L2 before incidence on identical diffusers D1 and D2, separated by a distance DS. The path length L1 serves as the reference beam path and remains unaltered throughout the course of the measurements. Path length L2 of Beam 2 is altered by translating a two-mirror assembly comprising mirrors M1 and M2 placed on a translation stage. The translation distance DT of the mirror pair is measured from a reference position DT = 0 at which the path lengths L1 and L2 are equal. An AC-coupled Photo-Detector (PD) is positioned at a distance Z away from the plane A of the two diffusers and equidistant from D1 and D2.


Fig. 9. Experimental setup to demonstrate the summation of two P-field contributions in Σ for different DT settings.

Translating the [M1, M2] mirror pair alters the path length L2 of Beam 2; the consequent phase difference ∆φP = |φ1 − φ2| between the two P-field contributions P1 = |P1|e^{jφ1} and P2 = |P2|e^{jφ2}, expressed simply as a function of DT, is given by

$$\Delta\phi_P = \frac{4\pi}{\lambda_P} D_T. \tag{15}$$

The amplitude contributions are equal, i.e., |P2| = |P1|, owing to the use of identical diffusers and a 50:50 BS. Therefore, the expected output signal PSum of the PD, expressed as a function of DT, is given by

$$P_{Sum} = |P_1|e^{j\phi_1} + |P_2|e^{j(\phi_1 + \Delta\phi_P)} = |P_1|e^{j\phi_1}\left(1 + e^{j\frac{4\pi}{\lambda_P}D_T}\right). \tag{16}$$

If we only detect the peak amplitude of PSum at each DT setting, where |PSum|_Peak denotes this peak amplitude of PSum, then

$$|P_{Sum}|_{Peak} = C_P\left|\,|P_1|e^{j\phi_1} + |P_2|e^{j(\phi_1 + \Delta\phi_P)}\right| = C_P|P_1|\left|1 + e^{j\frac{4\pi}{\lambda_P}D_T}\right|, \tag{17}$$

where CP is simply a coefficient of proportionality. As the maximum possible value in (17) is 2CP|P1|, the normalized magnitude |PNorm| of the P-field sum is given by

$$|P_{Norm}| = \frac{|P_{Sum}|_{Peak}}{2C_P|P_1|} = \frac{1}{2}\left|1 + e^{j\frac{4\pi}{\lambda_P}D_T}\right|. \tag{18}$$

As (18) predicts, the two P-field contributions from D1 and D2 should interfere constructively, despite a loss of optical coherence in propagation through the rough diffusers, when the mirror pair [M1, M2] is located at DT = 0. Consequently, for any integer Q, we expect to detect complete destructive interference between P1 and P2 when DT = QλP ± λP/4. In our experiment, we observe this behavior of completely constructive and destructive interference, as well as intermediate states of partial interference, at the corresponding DT settings.
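Equation (18) can be tabulated directly; note that (1/2)|1 + e^{j4πDT/λP}| = |cos(2πDT/λP)|. A minimal sketch of our own of the predicted curve in Fig. 10, with an illustrative sampling of DT:

```python
import numpy as np

# Sketch: theoretical |P_Norm| of Eq. (18) versus the mirror-pair translation D_T.
# lambda_P = 30 cm corresponds to the 1 GHz P-field signal used in the experiment.
lam_P = 0.30
DT = np.linspace(0.0, lam_P, 9)                      # sample translations (m)
P_norm = 0.5 * np.abs(1.0 + np.exp(1j * 4.0 * np.pi * DT / lam_P))
for d, p in zip(DT, P_norm):
    print(f"D_T = {100 * d:5.2f} cm  ->  |P_Norm| = {p:.3f}")
# Maxima at D_T = 0 and lambda_P/2; nulls at lambda_P/4 and 3*lambda_P/4.
```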

For the actual experiments, we used a fiber-coupled laser source with a wavelength of λE = 520 nm, modulated by a sinusoidal P-field signal of 1 GHz, which corresponds to a free-space P-field wavelength of roughly 30 cm. The diffusers were separated by a distance DS of 34.5 cm, and the distance Z between the aperture plane A (the plane of the diffusers) and the detection plane Σ was set to 36 cm.


Fig. 10. Comparison between theoretical predictions and experimental measurements of |PNorm| for different DT settings.

Fig. 11. Measurements of 〈I(x, y, z)〉 for different DT settings.

Two identical diffusers from the Newport 20DKIT-C3 light-shaping diffuser kit were used, which provided a Gaussian irradiance distribution at the detector plane. A P-field detector to measure |PNorm| was realized by connecting the output of an AC-coupled Menlo Systems APD210 photo-detector to an Agilent CXA N9000A RF spectrum analyzer. The spectrum-analyzer output |PSum|_Peak was normalized in post-processing to obtain |PNorm|.

The mirror pair [M1, M2] was translated, and |PNorm| was measured for different unique values of DT. In Fig. 10, we plot the measured data-point values of |PNorm| together with the theoretically expected behavior of |PNorm| as a function of DT calculated from (18). Comparing the theoretically predicted and experimentally measured values of |PNorm| at different DT settings, we clearly observe maximum P-field summation and cancellation at the expected DT settings, as well as very strong agreement at the other DT settings, which yield varying levels of partial P-field interference.

Using a slow Thorlabs SM05PD1A power meter/PD assembly with an integration time far greater than the time period of the P-field signal, we also recorded simultaneous measurements of the total optical irradiance at the location of the fast PD. Through these measurements, we show that the change in the P-field values at different DT settings is independent of the total number of photons present at the detection location, which we expected to remain almost constant for any DT. We provide a plot of this average optical irradiance in Fig. 11. As expected, the normalized average irradiance ⟨ITot(x, y, z)⟩ remains almost constant for all DT settings.


Fig. 12. Comparison between (a) conventional LOS E-field imaging and (b) P-field based NLOS imaging.

5. P-field Imaging Approach to NLOS Imaging

5.1. Comparison of NLOS Imaging with a P-field Virtual Camera to LOS Imaging with a Conventional Camera

Fundamental E-field imaging principles defined by the Huygens' integral in (1) completely describe conventional LOS imaging, as depicted in Fig. 12a, where we consider an imaging system comprising a monochromatic EM source, a camera and a point-like target (which we consider our object under investigation). The monochromatic wave emitted by the source interacts with the object and is focused at the detector/camera plane by an E-field imaging lens.

In a virtual P-field camera approach for NLOS imaging, the relay wall acts as the aperture of a holographic P-field projection and detection system. After detection of the P-field at all points on the aperture, any conceivable imaging system can be realized through digital post-processing. A simple imaging lens, for example, applies a position-dependent phase delay to the signal, followed by a summation of the fields on the camera pixels and a time integral over the absolute value, to reconstruct a 2D image of a scene (Fig. 1b). With the application of the correct time shifts to the received signal, the relay wall can be treated as a virtual P-field lens, which forms a P-field image of the NLOS scene at a virtual sensor located behind the relay wall, looking directly at the hidden scene. This is depicted in Fig. 12b.
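The following conceptual sketch (our illustration, not the authors' reconstruction code; the geometry and grid are placeholders) mimics this virtual-lens operation for a single monochromatic P-field component: focusing at a voxel amounts to applying the conjugate propagation phase, i.e., a time shift |r|/c, to the P-field recorded at each relay-wall point and summing.

```python
import numpy as np

# Conceptual sketch (not the authors' reconstruction code) of the relay wall as
# a virtual P-field lens: focusing at a voxel applies the conjugate propagation
# phase exp(-j*beta*|r|), i.e. a time shift |r|/c, to the P-field recorded at
# each relay-wall point and sums the result. Geometry is a placeholder.
beta = 2 * np.pi / 0.30                       # P-field wavenumber (lambda_P = 30 cm)
gx, gy = np.meshgrid(np.linspace(-1, 1, 64), np.linspace(-1, 1, 64))
wall = np.stack([gx.ravel(), gy.ravel(), np.zeros(gx.size)], axis=1)  # z = 0 plane

target = np.array([0.2, -0.1, 1.5])           # hidden point emitting a P-field
r_src = np.linalg.norm(wall - target, axis=1)
P_wall = np.exp(1j * beta * r_src) / r_src    # monochromatic P-field on the wall

def focus(voxel):
    """Virtual-lens focusing at 'voxel': conjugate phase, then coherent sum."""
    r = np.linalg.norm(wall - voxel, axis=1)
    return abs(np.sum(P_wall * np.exp(-1j * beta * r)))

# The response peaks at the true source location and falls off elsewhere.
print(focus(target), focus(np.array([0.6, 0.4, 1.5])))
```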

5.2. Limitations of the Phasor-Field Model

In the derivation of the Phasor-Field Rayleigh-Sommerfeld propagator, we made two important assumptions:

• Light is added incoherently such that the intensities of two overlapping light beams add.

• Light propagates from any point in the source plane on a spherical wavefront in all directions.

Another way to say this is that the light used is sufficiently incoherent that the phase of the electric field is random on scales shorter than the Phasor-Field wavelength. To ensure this, we require that the source surface is diffuse, which creates spatial incoherence. If the source surface is specular or partially specular, i.e., if it does not randomize the carrier phase, the Phasor-Field model no longer completely describes light transport. This situation arises when the phasor-field wave propagates around an occluding object surrounded by free space. To simulate propagation in this case, one would have to propagate the field to the plane of the occluding object and from there to the destination surface behind the occluder.


Since the plane of the occluding object contains free space, Phasor-Field light propagation does not correctly predict the field behind the object. A similar situation arises when imaging an NLOS object that is a specular reflector. In these cases, a different model or an extension of the Phasor-Field formalism is required. An extension that can model occlusions has been introduced by Dove et al. [26].

It is interesting to note that, while specular reflectors and occlusions in the scene affect the validity of the Phasor-Field model, they do not generally affect reconstructions done using the Phasor-Field formalism, which look the same regardless of the surface specularities [21]. However, in the presence of occlusions and specularities it is in certain cases possible to perform NLOS reconstructions with higher quality than what is permitted by Phasor-Field methods.

6. Conclusion

In this paper, we introduced the concept of irradiance phasor fields, a complex scalar quantity with an amplitude and a phase term. We show that P-fields provide an ideal representation for imaging through apertures which exhibit roughness that is on the order of the optical (E-field) wavelength but significantly less than the P-field wavelength. Provided that the optical spatial coherence length of the E-field is less than the P-field wavelength and its temporal coherence time is less than the detector integration time, the proposed P-field approach provides a means to model any NLOS imaging system, such as imagers that image around corners, as an LOS imaging system. The significant advantage of the P-field-based approach is the inherent ability to use well-known LOS imaging techniques to model any NLOS system, by considering partially reflective and scattering rough surfaces as P-field apertures and using existing knowledge of light transport in LOS imaging from there onward. We back our claims with in-depth near-field and far-field simulation results for uniformly illuminated rectangular and square apertures, as well as results from carefully designed experiments.


Appendix: P-field Imaging Approach

Irradiance Propagation between Two Spatial Locations

The Poynting vector S, which describes the energy propagation of any electromagnetic (EM) wave, is stated as the cross product of the wave's electric field (E-field) vector E and magnetic field vector H:

$$\mathbf{S} = \mathbf{E} \times \mathbf{H}. \tag{A.1}$$

Moreover, H can be expressed in terms of the wave vector K, E, the E-field frequency ω and the medium permeability µ as

$$\mathbf{H} = \frac{\mathbf{K} \times \mathbf{E}}{\omega\mu}, \tag{A.2}$$

where K can be expressed either in terms of the E-field wavelength λE or likewise ω as

$$\mathbf{K} = \frac{2\pi}{\lambda_E}\hat{K} = \frac{\omega}{c}\hat{K}. \tag{A.3}$$

In (A.3), K̂ signifies the unit vector K/|K|. Substituting (A.2) into (A.1), and knowing that E · K = 0 for an isotropic medium, S is given by

$$\mathbf{S} = \frac{1}{\omega\mu}\left[\mathbf{K}(\mathbf{E} \cdot \mathbf{E})\right] = \frac{|\mathbf{K}|}{\omega\mu}(\mathbf{E} \cdot \mathbf{E})\,\hat{K} = \frac{(\mathbf{E} \cdot \mathbf{E})}{\zeta}\,\hat{K}, \tag{A.4}$$

using the triple cross-product identity

$$\mathbf{a} \times (\mathbf{b} \times \mathbf{c}) = \mathbf{b}(\mathbf{a} \cdot \mathbf{c}) - \mathbf{c}(\mathbf{a} \cdot \mathbf{b}). \tag{A.5}$$

In (A.4), the quantity ζ is the impedance of the propagation medium, given by

$$\zeta = \frac{\omega\mu}{|\mathbf{K}|}. \tag{A.6}$$

For time-harmonic E-fields, a similar approach enables the derivation of an expression for the time-averaged Poynting vector ⟨S⟩ for optical irradiance [27], where

$$\langle\mathbf{S}\rangle = \frac{1}{T}\int_0^T \mathbf{S}\,\mathrm{d}t = \frac{1}{2}\frac{|\mathbf{K}|}{\omega\mu}\,\mathrm{Re}\left[\mathbf{E}_S \cdot \mathbf{E}_S^*\right]\hat{K} = \frac{1}{2}\frac{\mathbf{E}_S \cdot \mathbf{E}_S^*}{\zeta}\,\hat{K}. \tag{A.7}$$

In (A.7), ES is simply E expressed in phasor notation. Note that although the expressions in (A.4) and (A.7) are similar, they are distinctly different quantities: S is a time-varying Poynting vector, while ⟨S⟩ is a time-averaged Poynting vector, which is usually what a typical photo-detector detects.

Next, we use this knowledge to a) rigorously derive the irradiance contribution from a location (x′, y′, 0) within a plane A, which we refer to as the plane of a rough aperture (see the final subsection of this appendix for the definition of 'roughness' that we use for our considerations), to a location (x, y, z) in the Σ plane, which we refer to as the detection plane or the image plane, as shown in Fig. 2, and b) derive the cumulative irradiance contribution of all locations within A to the location (x, y, z) in Σ. Let

$$\mathbf{E}_{Inc}(x', y', 0) = E_0(x', y', 0)\cos(\omega t)\,\hat{e}' \tag{A.8}$$

be the incident E-field at (x′, y′, 0) in A, where ê′ denotes the E-field polarization unit vector. Then the transmitted E-field contribution from the same location (x′, y′, 0) in A is given by

$$\mathbf{E}(x', y', 0) = t_0(x', y', 0)E_0(x', y', 0)\cos\left(\omega t + \Delta\phi_K(x', y', 0)\right)\hat{e}, \tag{A.9}$$


where t0(x′, y′, 0) is the magnitude, and ∆φK(x′, y′, 0) the random phase contribution due to aperture roughness at location (x′, y′, 0), of the E-field transmission coefficient τ(x′, y′, 0) = t0(x′, y′, 0)e^{j∆φK(x′, y′, 0)}. The unit vector ê denotes the polarization of the transmitted E-field, which can be different from the initial polarization ê′ of E_Inc(x′, y′, 0). Moreover, owing to its time-harmonic nature, E(x′, y′, 0) can be expressed as a phasor quantity (where the subscript S denotes the phasor-equivalent notation for the E-field contribution from (x′, y′, 0) ∈ A):

$$\mathbf{E}_S(x', y', 0) = t_0(x', y', 0)E_0(x', y', 0)e^{j\Delta\phi_K(x', y', 0)}\,\hat{e}. \tag{A.10}$$

The E-field contribution from (x′, y′, 0) ∈ A to (x, y, z) ∈ Σ is expressed as a spherical E-field wavelet E with a 1/|r| amplitude drop-off and a phase φK = K|r| accumulated due to propagation. E is given by

$$\mathbf{E} = \mathbf{E}(x, y, z) = \frac{t_0(x', y', 0)E_0(x', y', 0)}{|r|}\cos\left(\omega t + K|r| + \Delta\phi_K(x', y', 0)\right)\hat{e}, \tag{A.11}$$

where |r| = √((x − x′)² + (y − y′)² + z²) is the distance between the locations (x′, y′, 0) and (x, y, z), and K = |K| = ω/c is the E-field wave number. As was the case in (A.10), E(x, y, z) can also be expressed in phasor notation as E_S(x, y, z), where

$$\mathbf{E}_S(x, y, z) = \frac{1}{|r|}\underbrace{t_0(x', y', 0)E_0(x', y', 0)e^{j\Delta\phi_K(x', y', 0)}e^{jK|r|}\,\hat{e}}_{\mathbf{E}_M(x, y, z)} = \frac{1}{|r|}\mathbf{E}_M(x, y, z). \tag{A.12}$$

In (A.12), E_M(x, y, z) is the phasor representation of the E-field E_S(x, y, z) contributed from (x′, y′, 0) ∈ A without the 1/|r| amplitude drop-off. In other words, we simply define a new quantity E_M(x, y, z) (still in phasor notation) as the E-field E_S(x′, y′, 0) from location (x′, y′, 0) ∈ A with the propagation phase added but without the amplitude drop-off, i.e.,

$$\mathbf{E}_M(x, y, z) = \mathbf{E}_S(x', y', 0)\,e^{jK|r|}. \tag{A.13}$$

It follows from (A.7) and (A.13) that the corresponding time-averaged optical irradiance contribution I(x′, y′, 0) emanating from aperture location (x′, y′, 0) can be expressed both in terms of E_S(x′, y′, 0) and E_M(x, y, z) as

$$I(x', y', 0) = \frac{\mathbf{E}_S(x', y', 0) \cdot \mathbf{E}_S^*(x', y', 0)}{2\zeta} = \frac{\mathbf{E}_M(x, y, z) \cdot \mathbf{E}_M^*(x, y, z)}{2\zeta} = \frac{|E_0(x', y', 0)|^2}{2\zeta}. \tag{A.14}$$

Similarly, the corresponding time-averaged optical irradiance contribution I(x, y, z) at (x, y, z) ∈ Σ from (x′, y′, 0) ∈ A is expressed as

$$I(x, y, z) = \frac{\mathbf{E}_S(x, y, z) \cdot \mathbf{E}_S^*(x, y, z)}{2\zeta} = \frac{\mathbf{E}_M(x, y, z) \cdot \mathbf{E}_M^*(x, y, z)}{2\zeta|r|^2} = \frac{I(x', y', 0)}{|r|^2}. \tag{A.15}$$

Having defined this basic E-field framework for this manuscript, we now proceed to derive the P-field integral in the following section.

P-field Propagation between Aperture and Detection Planes

Now let us consider the case when the optical irradiance is directly amplitude-modulated. A direct amplitude modulation of the optical irradiance is expressed as the multiplication of the time-varying Poynting vector S by a non-negative scalar modulating function P(t) of frequency Ω and amplitude P0, where

$$P(t) = P_0\left[1 + \cos(\Omega t)\right]. \tag{A.16}$$


After propagation from (x′, y′, 0) to (x, y, z), the modulation envelope P(t) is phase-shifted by β|r| due to propagation and by ∆φβ(x′, y′, 0) due to aperture roughness at (x′, y′, 0). This phase-shifted envelope at (x, y, z) is expressed as

$$P(r, t) = P_0\left[1 + \cos\left(\Omega t + \beta|r| + \Delta\phi_\beta(x', y', 0)\right)\right]. \tag{A.17}$$

In (A.17), β = 2π/λP is the wave number of P(r, t) expressed in terms of its wavelength λP. We demonstrate later that this modulating function P(r, t), scaled by the time-averaged unmodulated optical irradiance contribution, is a 'Huygens-like' phasor field (P-field) contribution from a diffuse aperture considered rough as per the definitions in the final subsection of this appendix. We can also write (A.17) as

$$P(r, t) = 2P_0\cos^2\left(\frac{\Omega t}{2} + \frac{\beta|r|}{2} + \frac{\Delta\phi_\beta(x', y', 0)}{2}\right), \tag{A.18}$$

which we can also express as

$$P(r, t) = \left(\mathrm{Re}\left[P_M e^{j\Omega t/2}\right]\right)\left(\mathrm{Re}\left[P_M e^{j\Omega t/2}\right]\right). \tag{A.19}$$

The spatially-dependent phasor representation of P(r, t) in (A.19) is denoted by PM, which is stated as

$$P_M = \sqrt{2P_0}\, e^{j\left(\beta|r|/2 + \Delta\phi_\beta(x', y', 0)/2\right)}. \tag{A.20}$$

In the following steps, in order to avoid long expressions, we denote E_M(x, y, z) simply as E_M. From the definitions of E_M and PM in (A.13) and (A.20) respectively, the time-averaged Poynting vector ⟨S(x, y, z)⟩, signifying the amplitude-modulated irradiance contribution at (x, y, z) from (x′, y′, 0), is expressed as

$$\langle\mathbf{S}(x, y, z)\rangle = \frac{L}{|r|^2\zeta}\left[\frac{1}{T}\int_0^T \left(\mathrm{Re}\left[\mathbf{E}_M e^{j\omega t}\right]\mathrm{Re}\left[P_M e^{j\Omega t/2}\right]\right) \cdot \left(\mathrm{Re}\left[\mathbf{E}_M e^{j\omega t}\right]\mathrm{Re}\left[P_M e^{j\Omega t/2}\right]\right)\mathrm{d}t\right]\hat{K}, \tag{A.21}$$

where L, a loss factor, is introduced to account for the irradiance loss due to scattering/absorption at the aperture and in propagation from (x′, y′, 0) to (x, y, z), and T is the detector integration time. Consequently, ⟨S(x, y, z)⟩ can be simply expressed as

$$\langle\mathbf{S}(x, y, z)\rangle = \frac{L}{|r|^2\zeta}\left[\frac{1}{T}\int_0^T \left(\frac{\mathbf{E}_M e^{j\omega t} + \mathbf{E}_M^* e^{-j\omega t}}{2}\right)\left(\frac{P_M e^{j\Omega t/2} + P_M^* e^{-j\Omega t/2}}{2}\right) \cdot \left(\frac{\mathbf{E}_M e^{j\omega t} + \mathbf{E}_M^* e^{-j\omega t}}{2}\right)\left(\frac{P_M e^{j\Omega t/2} + P_M^* e^{-j\Omega t/2}}{2}\right)\mathrm{d}t\right]\hat{K}. \tag{A.22}$$

It is to be noted that a photo-detector is only able to measure changes to the time-averaged irradiance magnitude I(x, y, z) of the Poynting vector. Therefore, from hereon, we omit the unit vector K̂ signifying the direction of electromagnetic energy flow and denote |⟨S(x, y, z)⟩| = I(x, y, z). For the case when the average aperture roughness is larger than the optical wavelength λE, and where the spatial coherence of the optical carrier is consequently reduced to less than the P-field wavelength λP, the total irradiance contribution ITot(x, y, z) from all aperture locations to a single location (x, y, z) in Σ can be expressed as the sum of all independent irradiance contributions from each location within A, i.e.,

$$I_{Tot}(x, y, z) \propto \int_{\mathcal{A}} \left(\frac{1}{T}\int_0^T \frac{I(x', y', 0)}{|r|^2}\,\mathrm{d}t\right)\mathrm{d}x'\,\mathrm{d}y'. \tag{A.23}$$


Defining Ω′ := Ω/2 in (A.22) and expanding the expression in (A.22) using the incoherent irradiance summation of (A.23), ITot(x, y, z) can be expressed as

$$\begin{aligned}
I_{Tot}(x, y, z) \approx{} &\left(\frac{LK_P}{16\zeta}\right)\int_{\mathcal{A}}\frac{1}{|r|^2}\Bigg(\frac{1}{T}\int_0^T\Big[\underbrace{\mathbf{E}_M\cdot\mathbf{E}_M e^{2j\omega t}P_M P_M e^{2j\Omega' t}}_{\text{Term 1}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M^* e^{-2j\omega t}P_M^* P_M^* e^{-2j\Omega' t}}_{\text{Term 2}} \\
&+ \underbrace{\mathbf{E}_M\cdot\mathbf{E}_M e^{2j\omega t}P_M P_M^*}_{\text{Term 3}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M^* e^{-2j\omega t}P_M^* P_M}_{\text{Term 4}} + \underbrace{\mathbf{E}_M\cdot\mathbf{E}_M e^{2j\omega t}P_M^* P_M}_{\text{Term 5}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M^* e^{-2j\omega t}P_M P_M^*}_{\text{Term 6}} \\
&+ \underbrace{\mathbf{E}_M\cdot\mathbf{E}_M e^{2j\omega t}P_M^* P_M^* e^{-2j\Omega' t}}_{\text{Term 7}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M^* e^{-2j\omega t}P_M P_M e^{2j\Omega' t}}_{\text{Term 8}}\Big]\,\mathrm{d}t\Bigg)\mathrm{d}x'\,\mathrm{d}y' \\
&+ \left(\frac{LK_P}{16\zeta}\right)\int_{\mathcal{A}}\frac{1}{|r|^2}\Bigg(\frac{1}{T}\int_0^T\Big[\underbrace{\mathbf{E}_M\cdot\mathbf{E}_M^* P_M P_M e^{2j\Omega' t}}_{\text{Term 9}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M P_M^* P_M^* e^{-2j\Omega' t}}_{\text{Term 10}} + \underbrace{\mathbf{E}_M\cdot\mathbf{E}_M^* P_M P_M^*}_{\text{Term 11}} \\
&+ \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M P_M^* P_M}_{\text{Term 12}} + \underbrace{\mathbf{E}_M\cdot\mathbf{E}_M^* P_M^* P_M^* e^{-2j\Omega' t}}_{\text{Term 13}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M P_M P_M e^{2j\Omega' t}}_{\text{Term 14}} + \underbrace{\mathbf{E}_M\cdot\mathbf{E}_M^* P_M^* P_M}_{\text{Term 15}} + \underbrace{\mathbf{E}_M^*\cdot\mathbf{E}_M P_M P_M^*}_{\text{Term 16}}\Big]\,\mathrm{d}t\Bigg)\mathrm{d}x'\,\mathrm{d}y'.
\end{aligned} \tag{A.24}$$

In (A.24), KP is a coefficient of proportionality. For a detector with an integration time T where

$$\frac{2\pi}{\omega} \ll T \ll \frac{2\pi}{\Omega}, \tag{A.25}$$

the detector integration time is much longer than the time period of the extremely high frequency 2ω, which allows the time-averages of Terms 1–8 in (A.24) to be approximated as zero. Recognizing that Terms 10, 12, 14 and 16 are the respective complex conjugates of Terms 9, 11, 13 and 15, we can state

$$I_{Tot}(x, y, z) \approx \left(\frac{K_P L}{16\zeta}\right)\int_{\mathcal{A}}\left[\frac{1}{T}\int_t^{t+T}\frac{1}{|r|^2}\Big(2\,\mathrm{Re}\left[\mathbf{E}_M\cdot\mathbf{E}_M^* P_M P_M e^{2j\Omega'\tau}\right] + 2\,\mathrm{Re}\left[\mathbf{E}_M\cdot\mathbf{E}_M^* P_M P_M^*\right] + 2\,\mathrm{Re}\left[\mathbf{E}_M\cdot\mathbf{E}_M^* P_M^* P_M\right] + 2\,\mathrm{Re}\left[\mathbf{E}_M\cdot\mathbf{E}_M^* P_M^* P_M^* e^{-2j\Omega'\tau}\right]\Big)\mathrm{d}\tau\right]\mathrm{d}x'\,\mathrm{d}y'. \tag{A.26}$$

Moreover, ITot(x, y, z) is expressed as ITot(x, y, z, t) because the slowly changing irradiance envelope still results in a time-dependence of different irradiance measurements over a fixed duration of the detector integration time window. Substituting (A.14) into (A.26), we obtain

$$I_{Tot}(x, y, z, t) \approx \frac{K_P L}{4}\int_{\mathcal{A}}\left[\frac{1}{T}\int_t^{t+T}\frac{I(x', y', 0)}{|r|^2}\Big(\mathrm{Re}\left[P_M P_M e^{2j\Omega'\tau}\right] + \mathrm{Re}\left[P_M P_M^*\right] + \mathrm{Re}\left[P_M^* P_M\right] + \mathrm{Re}\left[P_M^* P_M^* e^{-2j\Omega'\tau}\right]\Big)\mathrm{d}\tau\right]\mathrm{d}x'\,\mathrm{d}y'. \tag{A.27}$$

Substituting PM from (A.20), we can express (A.27) as

$$I_{Tot}(x, y, z, t) \approx \frac{K_P L}{2}\int_{\mathcal{A}}\left[\frac{1}{T}\int_t^{t+T}\frac{1}{|r|^2}\Big(P_0 I(x', y', 0)\cos\left(\Omega\tau + \beta|r| + \Delta\phi_\beta(x', y', 0)\right) + 2P_0 I(x', y', 0) + P_0 I(x', y', 0)\cos\left(-\Omega\tau - \beta|r| - \Delta\phi_\beta(x', y', 0)\right)\Big)\mathrm{d}\tau\right]\mathrm{d}x'\,\mathrm{d}y'. \tag{A.28}$$

If the P-field amplitude contribution P0,Ω(x′, y′, 0) from any aperture location (x′, y′, 0) is defined as P0,Ω(x′, y′, 0) = I(x′, y′, 0)P0, then

$$I_{Tot}(x, y, z, t) \approx K_P L\int_{\mathcal{A}}\left[\frac{1}{T}\int_t^{t+T}\frac{1}{|r|^2}\,P_{0,\Omega}(x', y', 0)\left[1 + \cos\left(\Omega\tau + \beta|r| + \Delta\phi_\beta(x', y', 0)\right)\right]\mathrm{d}\tau\right]\mathrm{d}x'\,\mathrm{d}y'. \tag{A.29}$$

When an AC-coupled photo-detector is used for detection, the DC components of all P-field contributions are filtered out, which allows us to detect signed P-field contributions expressed as

$$I_{Tot}(x, y, z, t) \approx K_P L\int_{\mathcal{A}}\left[\frac{1}{T}\int_t^{t+T}\frac{1}{|r|^2}\,P_{0,\Omega}(x', y', 0)\cos\left[\Omega\tau + \beta|r| + \Delta\phi_\beta(x', y', 0)\right]\mathrm{d}\tau\right]\mathrm{d}x'\,\mathrm{d}y'. \tag{A.30}$$

Considering the fact that T ≪ 2π/Ω, the cosine with frequency Ω approximately does not change within the interval T, so we can write

$$I_{Tot}(x, y, z, t) \approx K_P L\int_{\mathcal{A}}\frac{1}{|r|^2}\,P_{0,\Omega}(x', y', 0)\cos\left[\Omega t + \beta|r| + \Delta\phi_\beta(x', y', 0)\right]\mathrm{d}x'\,\mathrm{d}y'. \tag{A.31}$$

Moreover, owing to the time-harmonic nature of P-fields, ITot(x, y, z, t) is expressed as a sum of P-field wavelet contributions P(r):

$$I_{Tot}(x, y, z) \approx K_P L\int_{\mathcal{A}}\frac{1}{|r|}\,\underbrace{P_{0,\Omega}(x', y', 0)\,\frac{e^{j\beta|r|}}{|r|}}_{\mathcal{P}(r)}\,e^{j\Delta\phi_\beta(x', y', 0)}\,\mathrm{d}x'\,\mathrm{d}y', \tag{A.32}$$

where P(r) is the phasor-equivalent representation of each P-field wavelet contribution P(r, t), given by

$$P(r, t) = \frac{P_{0,\Omega}(x', y', 0)}{|r|}\cos\left(\Omega t + \beta|r|\right). \tag{A.33}$$

As long as the E-field spatial coherence is less than the P-field wavelength and the E-field temporal coherence is less than the detector integration time, this sum of P-field contributions in (A.32) completely describes light transport through rough apertures (such as the apertures used in NLOS scenarios) in a Huygens-like formulation, analogous to the sum of E-field wavelet contributions in (1) which describes conventional LOS imaging.
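As an illustrative numeric sanity check of the time-averaging argument behind (A.24)-(A.31) (our sketch, with placeholder values for ω, Ω and T satisfying (A.25)):

```python
import numpy as np

# Illustrative numeric check of the time averaging behind (A.24)-(A.31): for
# 2*pi/omega << T << 2*pi/Omega, carrier terms at ~2*omega average to ~0 while
# the P-field cosine is essentially constant over the window T. Placeholder values.
omega = 2 * np.pi * 3e14        # optical carrier frequency (~1 um light), rad/s
Omega = 2 * np.pi * 1e9         # P-field signal frequency (1 GHz), rad/s
T = 1e-11                       # integration time: 3.3 fs << 10 ps << 1 ns
t = np.linspace(0.0, T, 200001)

fast = np.mean(np.cos(2 * omega * t))   # Term 1-8 style carrier factor -> ~0
slow = np.mean(np.cos(Omega * t))       # P-field factor -> ~cos(0) = 1
print(fast, slow)
```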


Suppose we define a detection scheme, which we shall refer to as a P-field detector, which simply detects the scalar peak amplitude |ITot(x, y, z)| of the P-field envelope ITot(x, y, z) at every location (x, y, z) in Σ, where

$$|I_{Tot}(x, y, z)| = K_P L\left|\int_{\mathcal{A}}\frac{1}{|r|}\,\mathcal{P}(r)\,e^{j\Delta\phi_\beta(x', y', 0)}\,\mathrm{d}x'\,\mathrm{d}y'\right|. \tag{A.34}$$

Moreover, when either the far-field (Fraunhofer) approximation

$$z \gg \frac{\pi}{\lambda_P}\left(x^2 + y^2 + x'^2 + y'^2\right) \tag{A.35}$$

or the near-field (Fresnel) approximation

$$z^3 \gg \frac{\pi}{4\lambda_P}\left[(x - x')^2 + (y - y')^2\right]^2_{Max} \tag{A.36}$$

holds, then (A.34) can be expressed as

$$|I_{Tot}(x, y, z)| = \frac{K_P L}{z}\left|\int_{\mathcal{A}}\left[P_{0,\Omega}(x', y', 0)\,\frac{e^{j\beta|r|}}{|r|}\right]e^{j\Delta\phi_\beta(x', y', 0)}\,\mathrm{d}x'\,\mathrm{d}y'\right|. \tag{A.37}$$

The subscript 'Max' in (A.36) refers to the maximum possible value of the bracketed sum of coordinates. On the other hand, when neither the near-field nor the far-field approximation is applicable, then (A.37), with the inclusion of the P-field amplitude correction factor C(x, y, z, |r|) introduced in Sec. 2.2, can be expressed as

$$|I_{Tot}(x, y, z)| = \frac{K_P L\, C(x, y, z, |r|)}{|r(x, y, z)|_{Av}}\left|\int_{\mathcal{A}}\left[P_{0,\Omega}(x', y', 0)\,\frac{e^{j\beta|r|}}{|r|}\right]e^{j\Delta\phi_\beta(x', y', 0)}\,\mathrm{d}x'\,\mathrm{d}y'\right|. \tag{A.38}$$

Furthermore, if the aperture roughness is considered negligible for the case when λE ≪ λP, which implies ∆φβ ≈ 0 ∀ (x′, y′, 0) ∈ A, we can express (A.34) as

$$|I_{Tot-F}(x, y, z)| = K_P L\left|\int_{\mathcal{A}}\frac{1}{|r|}\,\mathcal{P}(r)\,\mathrm{d}x'\,\mathrm{d}y'\right|. \tag{A.39}$$

Moreover, we can express (A.39) analogously to (A.37) and (A.38) for the far-field/near-field and the ultra near-field cases respectively as

$$|I_{Tot-F}(x, y, z)| = \frac{K_P L}{z}\left|\int_{\mathcal{A}}\mathcal{P}(r)\,\mathrm{d}x'\,\mathrm{d}y'\right| \tag{A.40}$$

and

$$|I_{Tot-F}(x, y, z)| = \frac{K_P L\, C(x, y, z, |r|)}{|r(x, y, z)|_{Av}}\left|\int_{\mathcal{A}}\mathcal{P}(r)\,\mathrm{d}x'\,\mathrm{d}y'\right|. \tag{A.41}$$

We refer to (A.40) as the P-field integral for the near-field and far-field cases, and to (A.41) as the P-field integral for the ultra near-field imaging case, where the additional subscript F denotes a quasi-flat relay wall with enough roughness to randomize the E-field phase but not enough to incur any significant random phase change to the P-field contributions.

Impact of Aperture Roughness on P-field Phase

For the propagation of modulated optical irradiance from an aperture plane A to the detection plane Σ, we consider the relative effect of the aperture roughness on the E-field and P-field phases. For the visible spectrum, a frosted glass or a lens with a ground-glass side are examples of partially-transmissive rough apertures. On the other hand, a painted wall is one example of a partially-reflective rough aperture. Now let us assume that for a rough aperture (refer to Fig. 2), the E-field transmission function τ(x′, y′, 0) is given by

$$\tau(x', y', 0) = t_0(x', y', 0)\, e^{j\Delta\phi_R(x', y', 0)}, \tag{A.42}$$

where t0(x′, y′, 0) is the location-dependent E-field amplitude transmissivity of the aperture and ∆φR(x′, y′, 0) is a random phase variable denoting a random E-field phase contribution from any location (x′, y′, 0). In this paper, we consider an aperture 'rough' if any random aperture phase contribution from an arbitrary location (x′, y′, 0) ∈ A results in a corresponding random phase change ∆φK(x′, y′, 0) to the E-field contribution at (x′, y′, 0), yet the resulting P-field phase change ∆φβ(x′, y′, 0) is negligible, i.e., ∆φβ(x′, y′, 0) ≪ 2π. Mathematically speaking, without loss of generality, we assume that the surface has a minimum roughness of 0 and a maximum roughness of γ (having the unit of length), with λE ≪ γ ≪ λP. We therefore know that

$$\Delta\phi_K(x', y', 0) \le \frac{2\pi}{\lambda_E}\gamma \tag{A.43}$$

and

$$\Delta\phi_\beta(x', y', 0) \le \frac{2\pi}{\lambda_P}\gamma. \tag{A.44}$$

This implies that

$$\Delta\phi_\beta(x', y', 0) = \frac{\lambda_E}{\lambda_P}\Delta\phi_K(x', y', 0). \tag{A.45}$$

From (A.45), given λE ≪ γ ≪ λP, it can be inferred that the E-field phase shift at any aperture location is randomly distributed in the interval [0, 2π], while the P-field phase shift

$$\Delta\phi_\beta(x', y', 0) \approx 0. \tag{A.46}$$

This means that an aperture surface with roughness greater than the E-field wavelength but less than the P-field wavelength randomizes the E-field phase while leaving the P-field phase unaltered.
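As a quick numeric check with illustrative values (not taken from the manuscript): for a maximum roughness of γ = 50 µm, λE = 1 µm and λP = 30 cm,

$$\Delta\phi_K \le \frac{2\pi}{\lambda_E}\gamma = 100\pi, \qquad \Delta\phi_\beta \le \frac{2\pi}{\lambda_P}\gamma \approx 1.05\times 10^{-3}\ \text{rad} \ll 2\pi,$$

so the E-field phase is effectively randomized modulo 2π while the P-field phase is essentially unaltered, consistent with (A.46).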

Acknowledgments

The authors thank Jeremy Teichman for valuable discussions and suggestions.

Funding

This work was funded by DARPA through the DARPA REVEAL project (HR0011-16-C-0025), the NASA Innovative Advanced Concepts (NIAC) program (NNX15AQ29G), and the Office of Naval Research (ONR, N00014-15-1-2652).

References

1. A. Velten, T. Willwacher, O. Gupta, A. Veeraraghavan, M. G. Bawendi, and R. Raskar, "Recovering three-dimensional shape around a corner using ultrafast time-of-flight imaging," Nat. Commun. 3, 745 (2012).
2. M. Buttafava et al., "Non-line-of-sight imaging using a time-gated single photon avalanche diode," Opt. Express 23, 20997–21011 (2015).
3. R. Raskar and J. Davis, "5D time-light transport matrix: What can we reason about scene properties?" Tech. rep., MIT (2008).
4. O. Katz, E. Small, and Y. Silberberg, "Looking around corners and through thin turbid layers in real time with scattered incoherent light," Nat. Photonics 6, 549 (2012).
5. A. Kadambi, H. Zhao, B. Shi, and R. Raskar, "Occluded imaging with time-of-flight sensors," ACM Trans. Graph. 35, 15:1–15:12 (2016).
6. M. M. Balaji, A. Viswanath, P. Rangarajan, D. MacFarlane, and M. P. Christensen, "Resolving non line-of-sight (NLOS) motion using speckle," in Computational Optical Sensing and Imaging (Optical Society of America, 2018), pp. CM2E–2.
7. P. Sen, B. Chen, G. Garg, S. R. Marschner, M. Horowitz, M. Levoy, and H. P. A. Lensch, "Dual photography," ACM Trans. Graph. 24, 745–755 (2005).
8. S. M. Seitz, Y. Matsushita, and K. N. Kutulakos, "A theory of inverse light transport," in Tenth IEEE International Conference on Computer Vision (ICCV'05), vol. 2 (2005), pp. 1440–1447.
9. A. Kirmani, T. Hutchison, J. Davis, and R. Raskar, "Looking around the corner using ultrafast transient imaging," Int. J. Comput. Vis. 95, 13–28 (2011).
10. O. Gupta, T. Willwacher, A. Velten, A. Veeraraghavan, and R. Raskar, "Reconstruction of hidden 3D shapes using diffuse reflections," Opt. Express 20, 19096–19108 (2012).
11. D. Wu et al., Frequency Analysis of Transient Light Transport with Applications in Bare Sensor Imaging (Springer Berlin Heidelberg, Berlin, Heidelberg, 2012), pp. 542–555.
12. F. Heide, M. B. Hullin, J. Gregson, and W. Heidrich, "Low-budget transient imaging using photonic mixer devices," ACM Trans. Graph. 32, 45:1–45:10 (2013).
13. F. Heide, L. Xiao, W. Heidrich, and M. B. Hullin, "Diffuse mirrors: 3D reconstruction from diffuse indirect illumination using inexpensive time-of-flight sensors," in 2014 IEEE Conference on Computer Vision and Pattern Recognition (2014), pp. 3222–3229.
14. F. Heide, W. Heidrich, M. Hullin, and G. Wetzstein, "Doppler time-of-flight imaging," ACM Trans. Graph. 34, 36:1–36:11 (2015).
15. M. Laurenzis and A. Velten, "Nonline-of-sight laser gated viewing of scattered photons," Opt. Eng. 53, 023102 (2014).
16. C.-Y. Tsai, K. N. Kutulakos, S. G. Narasimhan, and A. C. Sankaranarayanan, "The geometry of first-returning photons for non-line-of-sight imaging," in IEEE Intl. Conf. on Computer Vision and Pattern Recognition (CVPR) (2017).
17. M. La Manna, F. Kine, E. Breitbach, J. Jackson, and A. Velten, "Error backprojection algorithms for non-line-of-sight imaging," MINDS@UW (2017).
18. J. Iseringhausen and M. B. Hullin, "Non-line-of-sight reconstruction using efficient transient rendering," arXiv preprint arXiv:1809.08044 (2018).
19. F. Heide, M. O'Toole, K. Zang, D. Lindell, S. Diamond, and G. Wetzstein, "Non-line-of-sight imaging with partial occluders and surface normals," arXiv preprint arXiv:1711.07134 (2017).
20. X. Liu, S. Bauer, and A. Velten, "Analysis of feature visibility in non-line-of-sight measurements," in IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) (2019).
21. X. Liu, I. Guillén, M. La Manna, J. H. Nam, S. A. Reza, T. H. Le, D. Gutierrez, A. Jarabo, and A. Velten, "Virtual wave optics for non-line-of-sight imaging," arXiv preprint arXiv:1810.07535 (2018).
22. D. B. Lindell, G. Wetzstein, and M. O'Toole, "Wave-based non-line-of-sight imaging using fast f-k migration," ACM Trans. Graph. 38, 116 (2019).
23. M. Gupta, S. K. Nayar, M. B. Hullin, and J. Martin, "Phasor imaging: A generalization of correlation-based time-of-flight imaging," ACM Trans. Graph. 34, 156:1–156:18 (2015).
24. J. W. Goodman, Introduction to Fourier Optics (McGraw-Hill, New York, NY, 1996).
25. W. Stutzman and G. Thiele, Antenna Theory and Design, 2nd ed. (Wiley, Hoboken, NJ, 1998).
26. J. Dove and J. H. Shapiro, "Paraxial theory of phasor-field imaging," Opt. Express 27, 18016–18037 (2019).
27. S. Silver, Microwave Antenna Theory and Design (IET, 1984).