
Computational super-resolution phase retrieval from multiple phase-coded diffraction patterns: simulation study and experiments

VLADIMIR KATKOVNIK1, IGOR SHEVKUNOV1,2,*, NIKOLAY V. PETROV2, AND KAREN EGIAZARIAN1

1 Laboratory of Signal Processing, Tampere University of Technology, P.O. Box 553, FI-33101 Tampere, Finland
2 Department of Photonics and Optical Information Technology, ITMO University, St. Petersburg, Russia
* Corresponding author: [email protected]

Compiled April 10, 2017

Computational super-resolution inverse diffraction phase retrieval is considered. The optical setup is lensless, with a spatial light modulator (SLM) for aperture phase coding. The paper is focused on experimental tests of the Super-Resolution Sparse Phase Amplitude Retrieval (SR-SPAR) algorithm. We start from simulation tests and proceed to physical experiments. Both the simulation tests and the experiments demonstrate good-quality imaging for super-resolution with factor 4, which means that the computational pixels of the reconstructed object are 4 times smaller than the sensor pixels. © 2016 Optical Society of America

OCIS codes: (100.6640) Superresolution, (100.5070) Phase retrieval, (100.0100) Image processing, (110.4280) Noise in imaging systems, (070.2025) Discrete optical signal processing.

http://dx.doi.org/10.1364/optica.XX.XXXXXX

1. INTRODUCTION

A. Problem setup

The phase retrieval problem concerns reconstruction of 2D and 3D complex-domain objects provided that only the intensity of light radiation is measured. Such problems appear in science and engineering, with a wide area of applications from astronomy and optics to medicine and biology (e.g. [1–3]).

The standard formalization of the phase retrieval problem is of the form:

ys = |Ps{uo}|², s = 1, ..., L, (1)

where uo ∈ C^{N×N} is an N × N complex-valued 2D image of the object (specimen); Ps : C^{N×N} → C^{M×M} is a complex-valued forward propagation operator from the object plane to the sensor plane; ys ∈ R^{M×M}_+ are M × M intensity measurements at the sensor plane; L is the number of experiments; and us = Ps{uo} denotes the complex-valued wavefront at the sensor plane.

In this paper we consider the setup with aperture modulation such that

us = Ps{uo} = P{Ms ◦ uo}. (2)

Here Ms ∈ C^{N×N} are phase modulation masks, Ms(k, l) = exp(jφk,l(s)), and '◦' stands for the entry-wise (Hadamard) product of two matrices.

The phases φk,l(s) in Ms can be generated as deterministic or random. This results in observations known as coded diffraction patterns (e.g. [4]):

ys = |P{Ms ◦ uo}|², s = 1, ..., L. (3)

The phase modulation can change the diffraction pattern of P{uo} dramatically, enabling a redistribution of the observed intensities from low to higher frequencies.

Various methods have been developed to make the operators Ps sufficiently different, gaining the observation diversity that enables recovery of uo from the observations {ys}, s = 1, ..., L. Defocusing of the registered images is one of the instruments for obtaining sufficient phase diversity [5–9]. In recent developments a spatial light modulator (SLM) is exploited for the defocusing (e.g. [10, 11]). Modulation of the wavefront is another efficient tool to achieve the desirable diversity of observations (e.g. [12–14]).

For noisy observations, Eq. (3) is replaced by

zs = G{|us|²}, s = 1, ..., L, (4)

Page 2: Computational super-resolution phase retrieval …...Research Article Vol. X, No. X / April 2016 / Optica 1 Computational super-resolution phase retrieval from multiple phase-coded

Research Article Vol. X, No. X / April 2016 / Optica 2

where G stands for a generator of random observations.

For the wavefront propagation from the object to the sensor plane we use the Rayleigh-Sommerfeld model with the transfer function defined through the angular spectrum (AS) ([15], Eq. (3-74)):

us(x, y, z) = F⁻¹[ H(fx, fy, z) · F[us(x, y, 0)] ], (5)

H(fx, fy, z) = exp[ i(2π/λ)z √(1 − λ²(fx² + fy²)) ] for fx² + fy² ≤ 1/λ², and 0 otherwise, (6)

where the operators F and F⁻¹ stand for the Fourier transform (FT) and the inverse Fourier transform, respectively; us(x, y, z) is the wavefront propagated a distance z from its initial position us(x, y, 0); H(fx, fy, z) is the angular spectrum transfer function; fx, fy are spatial frequencies; x, y denote spatial coordinates; and λ is the wavelength.
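For illustration, Eqs. (5)-(6) map directly onto an FFT-based implementation. The following NumPy sketch (the function name and sampling conventions are our own illustration, not the authors' released code) propagates a sampled wavefront over a distance z:

```python
import numpy as np

def angular_spectrum_propagate(u0, wavelength, z, dx):
    """Propagate the sampled wavefront u0 over a distance z [m] using the
    angular spectrum transfer function of Eqs. (5)-(6); dx is the sampling
    pitch [m] of the computational grid."""
    n, m = u0.shape
    fx = np.fft.fftfreq(m, d=dx)   # spatial frequencies fx [1/m]
    fy = np.fft.fftfreq(n, d=dx)   # spatial frequencies fy [1/m]
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - wavelength**2 * (FX**2 + FY**2)
    # Transfer function of Eq. (6); components with fx^2 + fy^2 > 1/lambda^2
    # (evanescent waves) are zeroed.
    H = np.where(arg >= 0,
                 np.exp(1j * 2 * np.pi / wavelength * z * np.sqrt(np.maximum(arg, 0.0))),
                 0.0)
    return np.fft.ifft2(H * np.fft.fft2(u0))
```

Propagating with −z inverts the operator on the propagating components, which is one way the backward operator P⁻¹ used later can be realized.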

In this paper we assume that the observations have a Poissonian distribution, typical for optical photon counting. This process is well modelled by independent Poissonian random variables of the form

p(zs[l] = k) = exp(−ys[l]χ) (ys[l]χ)^k / k!, (7)

where p(zs[l] = k) is the probability that the random observation zs[l] takes the integer value k ≥ 0, and ys[l] is the intensity of the wavefront, defined by Eq. (1), at the pixel l.

The parameter χ > 0 in Eq. (7) is a scaling factor of the Poisson distribution. Defining the observation signal-to-noise ratio (SNR) as the ratio between the square of the mean and the variance of zs, we have SNR = E²{zs}/var{zs} = ysχ. It follows that the relative noisiness of the observations becomes stronger as χ → 0 (SNR → 0) and vanishes as χ → ∞ (SNR → ∞). The latter case corresponds to the noiseless scenario, i.e. zs/χ → ys with probability 1. The scale parameter χ is important for modeling, as it allows control of the noise level in the observations.
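As a small illustration of this observation model (the variable names are ours), Poissonian data with a given χ can be generated and the role of the scale parameter verified numerically:

```python
import numpy as np

rng = np.random.default_rng(0)

def poisson_observations(y, chi):
    """Draw z ~ Poisson(y * chi) per Eq. (7); z / chi is then an unbiased
    estimate of the intensity y, with per-pixel SNR = y * chi."""
    return rng.poisson(y * chi)

y = np.full((64, 64), 10.0)        # flat test intensity
for chi in (0.1, 10.0, 1000.0):    # from strongly noisy to nearly noiseless
    z = poisson_observations(y, chi)
    print(chi, np.mean((z / chi - y) ** 2))  # empirical error ~ y / chi
```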

Reconstruction of the complex-valued object uo (phase and amplitude) from noiseless or noisy observations is the phase retrieval problem. Here "phase" emphasizes that the object phase is the variable of first priority, while the amplitude may be an auxiliary variable, often useful only for improving the phase imaging.

B. Phase retrieval algorithms

There is a steady flow of publications on phase retrieval. Various versions of the Gerchberg-Saxton (GS) technique are quite universal across different setups (e.g. [7–9, 16, 17]). These algorithms, based on alternating projections between the object and observation planes, are flexible enough to incorporate any information available for the variables in these planes. Recent developments in this area, as well as a review, can be found in [18].

Contrary to this type of intuitive heuristic algorithm, the variational formulations have a stronger mathematical background and are based on numerical algorithms solving optimization problems (e.g. [19–21]).

A new variational algorithm for phase retrieval from noisy data, based on transform domain sparsity for the object phase and amplitude, is developed in [22]. The sparsity concept as a tool for phase retrieval is the topic of the paper [23], where an original sparsifying learning transform is developed.

Phase retrieval from coded diffraction patterns is of special interest in recent publications (e.g. [13, 24]). The uniqueness of phase retrieval in this scenario is proved in the latter paper.

The spatial resolution of phase retrieval is limited by two factors: low-pass filtering by the propagation operator P and the size of the sensor pixels. High-frequency spatial information is lost in the intensity observations, which can be treated as corresponding to a sub-sampled true object uo. Super-resolution imaging compensates for this sub-sampling effect.

One of the straightforward approaches to overcome the pixel size limitations is to use a sequence of laterally shifted holograms (e.g. [25–27]).

The computational approach to super-resolution is different: it assumes that the optical setup is fixed and that the super-resolution image should be extracted from the available data. Note that for real-domain imaging the computational approach is a hot topic with a flow of publications, while for complex-domain phase/amplitude imaging the situation is quite different, with only a few publications that can be referred to, in particular, for phase retrieval.

Compressed sensing (CS) is one of the comparatively new computational techniques for restoration of sub-sampled data. Applications of this kind of technique in optics can be found in [28, 29]. CS can be treated as a form of super-resolution; however, there is a deep difference between the two problems. CS is focused on sub-sampling design with the smallest possible number of observations as a target, while in super-resolution the observations are given on a usually regular and fixed grid.

C. Contribution and structure of this paper

The transform domain phase/amplitude sparsity introduced in [22] was generalized for super-resolution phase retrieval in [30]. In this paper we modify this algorithm for the lensless optical setup with the AS transfer function for wavefront propagation, Eqs. (5)-(6); by contrast, a lens-based setup and the FT propagation model are used in [30]. We provide simulation tests and experimental results demonstrating that the proposed algorithm enables super-resolution phase retrieval with an up-sampling factor of up to 4.

In what follows, the proposed algorithm is presented in Section 2. The experimental details and results are discussed in Section 3.

2. SUPER-RESOLUTION PHASE RETRIEVAL

A. Complex domain sparsity

The complex domain sparsity for the object, as used in this paper, assumes that there are small fragments (patches) of the object uo which have similar features. These similar patches can be found in different parts of the image. It follows that these patches taken together admit sparse representations: they can be well approximated by linear combinations of a small number of basis functions.

Complex domain images, such as the object uo = Bo exp(iϕo), are defined by two variables: the phase ϕo and the amplitude Bo. Respectively, the sparse representation can be imposed on the following pairs of real-valued variables:

(1) The phase ϕ (interferometric or absolute) and the amplitude Bo;

(2) The real and imaginary parts of uo.

Page 3: Computational super-resolution phase retrieval …...Research Article Vol. X, No. X / April 2016 / Optica 1 Computational super-resolution phase retrieval from multiple phase-coded

Research Article Vol. X, No. X / April 2016 / Optica 3

In this paper we apply the absolute phase/amplitude sparsity following [31, 32], where it has already been tested on various optical problems.

The success of the sparsity approach depends on how rich and redundant the dictionaries (sets of basis functions) used for image analysis and synthesis are. In this paper, for the sparsity implementation, we use the Block-Matching and 3D filtering (BM3D) algorithm [33].

Let us mention the basic steps of this advanced technique, known as nonlocal self-similarity sparsity. At the first stage the image is partitioned into small overlapping square patches. For each patch a group of similar patches is collected, and these are stacked together to form a 3D array (group). This stage is called grouping. The entire 3D group-array is projected onto a predefined 3D transform basis. The spectral coefficients obtained as a result of this transform are hard-thresholded (small coefficients are zeroed), and the inverse 3D transform gives the filtered patches, which are returned to the original positions of these patches in the image. This stage is called collaborative filtering. This process is repeated for all pixels of the entire image, and the obtained overlapping filtered patches are aggregated into the final image estimate. This last stage is called aggregation. The details of this algorithm can be found in [33].
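To make the three stages concrete, here is a deliberately simplified toy version of the grouping / collaborative filtering / aggregation loop (hard-thresholding stage only; the actual BM3D of [33] adds many refinements that we omit here):

```python
import numpy as np
from scipy.fft import dctn, idctn

def bm3d_like_threshold(img, patch=8, stride=4, group_size=16, th=0.5):
    """Toy nonlocal self-similarity filter: grouping, 3D transform with
    hard thresholding (collaborative filtering), and aggregation."""
    img = np.asarray(img, dtype=float)
    h, w = img.shape
    pos = [(i, j) for i in range(0, h - patch + 1, stride)
                  for j in range(0, w - patch + 1, stride)]
    get = lambda p: img[p[0]:p[0] + patch, p[1]:p[1] + patch]
    acc = np.zeros_like(img)
    wgt = np.zeros_like(img)
    for p in pos:
        # Grouping: stack the group_size patches most similar to the reference.
        group = sorted(pos, key=lambda q: np.sum((get(p) - get(q)) ** 2))[:group_size]
        stack = np.stack([get(q) for q in group])
        # Collaborative filtering: 3D DCT, zero the small coefficients, invert.
        spec = dctn(stack, norm='ortho')
        spec[np.abs(spec) < th] = 0.0
        filt = idctn(spec, norm='ortho')
        # Aggregation: return the filtered patches to their positions, average.
        for k, q in enumerate(group):
            acc[q[0]:q[0] + patch, q[1]:q[1] + patch] += filt[k]
            wgt[q[0]:q[0] + patch, q[1]:q[1] + patch] += 1.0
    return acc / np.maximum(wgt, 1e-12)
```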

The links between this algorithm and the general sparsity concept are revealed in [34]. It is shown there that the grouping operations result in data-adaptive analysis and synthesis image transforms (frames), which, combined with hard-thresholding of these transforms, define the BM3D algorithm. It is emphasized that the sparsity is achieved mainly due to the grouping, which allows similar patches to be analyzed jointly and in this way guarantees the sparsity (self-similarity of patches) at least within each of the 3D groups.

Note that the standard BM3D algorithm as presented in [33] is composed of two successive steps: thresholding and Wiener filtering. In this paper we use a simplified version of BM3D including grouping, transforms and thresholding, without the Wiener filtering. In this form the algorithm is obtained from the variational setting of the wavefront retrieval problem.

It is shown in [22] that the nonlocal block-wise sparsity imposed on phase and amplitude results in separate BM3D filtering of the phase and the amplitude:

ϕ̂o = BM3Dphase(ϕo, thϕ), (8)

B̂o = BM3Dampl(Bo, thB). (9)

Here ϕ̂o and B̂o are sparse approximations of ϕo and Bo; the indices "phase" and "ampl" of BM3D emphasize that the parameters of BM3D can be different for the phase and the amplitude; thϕ and thB are the thresholding parameters of the algorithms. The phase in Eq. (8) can be interferometric or absolute, depending on the sparsity formulation.
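As a usage sketch, the toy filter given above can stand in for the BM3D operators of Eqs. (8)-(9), applied separately to phase and amplitude (the threshold values are borrowed from Section 2.C for flavor only; the scales of the toy filter and the real BM3D differ):

```python
import numpy as np

# bm3d_like_threshold is the toy filter defined in the sketch above.
x = np.exp(1j * np.ones((64, 64)))                     # some complex-valued object
phase_hat = bm3d_like_threshold(np.angle(x), th=2.0)   # Eq. (8), th_phi = 2.0
ampl_hat = bm3d_like_threshold(np.abs(x), th=2.0)      # Eq. (9), th_B = 2.0
x_hat = ampl_hat * np.exp(1j * phase_hat)
```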

B. Super-resolution SPAR algorithm

Conventionally the pixels are square, ∆SLM × ∆SLM and ∆S × ∆S for the SLM and the sensor, respectively. A continuous object is discretized by computational pixels ∆c × ∆c. This discretization is necessary both for digital data processing and for modeling of the wavefront propagation and image formation. Contrary to the pixels of the SLM and sensor, which are defined by the corresponding optical-electronic devices, the object pixels ∆o are computational and may be taken of arbitrarily small size.

Assuming for a moment ∆c = ∆S, reconstruction of uo from the observations {zs} is the conventional phase retrieval, with the resolution for the object dictated by the pixel size of the sensor. Let us term this case pixel-resolution imaging.

If ∆c < ∆S we arrive at the much more interesting problem of pixel super-resolution. It is convenient to assume that ∆S = rs · ∆c, where rs ≥ 1 is an integer pixel super-resolution (up-sampling) factor, and ∆SLM = rSLM · ∆c, rSLM ≥ 1, where rSLM² is the number of smaller ∆c × ∆c computational object pixels covering a single SLM pixel. This SLM pixel gives the same modulation phase-shift for all object pixels in this group.

In order to study the various effects of SLM phase modulation, we introduce an SLM superpixel as a larger pixel of size M·∆SLM × M·∆SLM composed of M × M adjacent SLM pixels. In the phase modulation, all the corresponding M × M SLM pixels provide the same phase shift.
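For illustration, a random superpixel mask of this kind can be constructed by drawing one phase value per superpixel and replicating it over the corresponding block of computational pixels (a sketch with our own naming, assuming the zero-mean Gaussian random phases used later in the tests):

```python
import numpy as np

rng = np.random.default_rng(1)

def random_phase_mask(n_super, M, r_slm, std):
    """Random SLM phase mask: one zero-mean Gaussian phase per superpixel
    of M x M SLM pixels, replicated over M * r_slm computational pixels
    in each direction (r_slm computational pixels per SLM pixel)."""
    phi = rng.normal(0.0, std, size=(n_super, n_super))
    phi = np.kron(phi, np.ones((M * r_slm, M * r_slm)))  # replicate per group
    return np.exp(1j * phi)

# e.g. a 64 x 64 SLM-pixel mask with M = 8 superpixels and r_SLM = 4:
Ms = random_phase_mask(n_super=8, M=8, r_slm=4, std=0.3 * np.pi)
```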

The computational wavefront reconstruction goes from the continuous domain wavefront propagation, Eqs. (5)-(6), to the corresponding discrete model based on pixelation of the object distribution uo. Thus we arrive at a discrete model of the system at hand, linking the discrete values of the sensor output (observations) with the discrete values of the object, where the integral FT is replaced by the Fast Fourier Transform (FFT). Note that all these calculations are produced for variables given with the computational sampling period ∆c.

It is well known that discrete modelling of the continuous propagation model, Eqs. (5)-(6), imposes restrictions on the computational sampling period ∆c in connection with the distance z, the wavelength λ and the computational aperture size. With reference to [35], [36] and [37], for the considered scenario these restrictions can be given as the inequality

z ≤ zmax = N∆c²/λ, (10)

where zmax is an upper bound for the distance between the object and the sensor, and N defines the N × N computational support of the zero-padded object and sensor, with the computational aperture size N∆c × N∆c.
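With the parameters used later in Section 3 (N = 4992, λ = 532 nm), Eq. (10) is easy to evaluate directly; this quick check reproduces the bounds quoted there:

```python
N, lam = 4992, 532e-9            # computational support and wavelength [m]
for dc in (1.4e-6, 0.35e-6):     # pixel- and subpixel-resolution sampling
    print(dc, N * dc**2 / lam)   # z_max: about 18.4 mm and about 1.15 mm
```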

In order to simplify the algorithm presentation we preserve the notation P for this discrete model, initially introduced for the continuous variables and integrals. The algorithm can then be given in a form similar to the phase retrieval super-resolution algorithm in [30]. We use the abbreviation SR-SPAR (Super-Resolution Sparse Phase Amplitude Retrieval) for this algorithm.

It is important to note that SR-SPAR originates from the variational formulation introduced for optimal reconstruction of uo from the Poissonian observations {zs[k, l]}. The corresponding minus log-likelihood for Poissonian observations, according to Eq. (7), is as follows:

ℒ = ∑_{s=1}^{L} ∑_{k,l} [ |us[k, l]|²χ − zs[k, l] log(|us[k, l]|²χ) ]. (11)

This criterion should be minimized with respect to uo[k, l], subject to the equations linking uo and us and the restrictions imposed by the sparsity requirements.
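A minimal sketch of evaluating this objective for candidate sensor-plane wavefronts (the names are illustrative, not from the released code):

```python
import numpy as np

def neg_log_likelihood(u_list, z_list, chi):
    """Minus log-likelihood of Eq. (11), up to additive constants.
    u_list: sensor-plane wavefronts u_s; z_list: observations z_s."""
    total = 0.0
    for u, z in zip(u_list, z_list):
        y = np.abs(u) ** 2 * chi
        total += np.sum(y - z * np.log(y + 1e-30))  # small eps guards log(0)
    return total
```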

The derivation of the algorithm is similar to the technique developed in [22] for pixel-resolution phase retrieval and FT propagation. The difference mainly concerns the propagation operator used and the sampling rates, which are ∆o = ∆s in [22] and ∆o = ∆s/rs in this paper; the observations should then be upsampled by the factor rs.

We show the block-scheme of this algorithm in Fig. 1, referring to [22] and [30].


Input: {z̃s}, {Ms}, s = 1, ..., L

Initialization: x^0 = (1/L) ∑_{s=1}^{L} M*_s · P⁻¹{√z̃s · exp(j · 0)}

Repeat T times (t = 0, ..., T):

1. Forward propagation: v^t_s = P{Ms · x^t}, v^t_s ∈ SD, s = 1, ..., L

2. Poissonian noise suppression: u^t_s = b^t_s exp(j · angle(v^t_s)) for v^t_s ∈ SS; u^t_s = v^t_s for v^t_s ∈ SD\SS, s = 1, ..., L

3. Backward propagation: x^t = (1/L) ∑_{s=1}^{L} M*_s · P⁻¹{u^t_s}

4. Phase unwrapping: ϕ^t_abs = W⁻¹(angle(x^t))

5. Sparse phase and amplitude filtering: ϕ^{t+1}_abs = BM3Dphase(ϕ^t_abs, thϕ), B^{t+1} = BM3Dampl(abs(x^t), tha)

6. Object wavefront update: x^{t+1} = B^{t+1} exp(jϕ^{t+1}_abs)

Output: ϕ̂o,abs = ϕ^{T+1}_abs, B̂o = B^{T+1}.

Fig. 1. SR-SPAR algorithm.
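For orientation, the loop of Fig. 1 can be sketched compactly as follows. This is our own illustration, not the released MATLAB code: prop and iprop stand for P and P⁻¹ (e.g. the angular spectrum sketch given earlier, with +z and −z), bm3d_filter stands for the BM3D filtering of Eqs. (8)-(9), the sensor is assumed to cover the whole computational area (SS = SD), and the amplitude update is that of Eq. (12), discussed below:

```python
import numpy as np

def sr_spar(z_up, masks, prop, iprop, bm3d_filter, chi, gamma1, T=30):
    """Skeleton of the SR-SPAR iterations of Fig. 1.
    z_up: upsampled observations z~_s; masks: SLM phase masks M_s."""
    L = len(z_up)
    # Initialization: back-propagate the square roots of the observations.
    x = sum(np.conj(m) * iprop(np.sqrt(z)) for m, z in zip(masks, z_up)) / L
    for t in range(T):
        us = []
        for m, z in zip(masks, z_up):
            v = prop(m * x)                                # Step 1: forward propagation
            b = ((np.abs(v) + np.sqrt(np.abs(v) ** 2       # Step 2: amplitude update,
                  + 4 * z * gamma1 * (1 + gamma1 * chi)))  # Eq. (12)
                 / (2 * (1 + gamma1 * chi)))
            us.append(b * np.exp(1j * np.angle(v)))
        x = sum(np.conj(m) * iprop(u) for m, u in zip(masks, us)) / L  # Step 3
        # Steps 4-5: sparse filtering; phase unwrapping is omitted here,
        # which is valid while the object phase stays within a 2*pi range.
        phase = bm3d_filter(np.angle(x))
        ampl = bm3d_filter(np.abs(x))
        x = ampl * np.exp(1j * phase)                      # Step 6: wavefront update
    return x
```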

The inputs z̃s of this algorithm are the observations zs upsampled by the factor rs. We use zero-order upsampling, giving z̃s as piece-wise invariant, with the invariant values for the computational pixels corresponding to each of the larger sensor pixels. At the initialization step the initial estimate x^0 of the object wavefront is created. At Step 1 the object wavefront estimate x^t is multiplied by the phase mask Ms and propagated by the operator P to the sensor plane; the result is denoted v^t_s. These wavefronts are calculated for the computational diffraction area N × N, denoted SD.
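The zero-order upsampling itself admits a one-line sketch (our own naming):

```python
import numpy as np

def zero_order_upsample(z, rs):
    """Replicate each sensor-pixel value over an rs x rs block of
    computational pixels, giving the piece-wise invariant z~_s."""
    return np.kron(z, np.ones((rs, rs)))
```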

At Step 2 this wavefront is updated to the variable u^t_s by filtering the amplitude of v^t_s according to the given observations z̃s. The following formula, derived in [22], defines how the amplitude bs is calculated:

bs = ( |vs| + √( |vs|² + 4z̃sγ1(1 + γ1χ) ) ) / ( 2(1 + γ1χ) ). (12)

These calculations are element-wise; γ1 > 0 is a parameter of the algorithm. Naturally, this update is produced only where the observation z̃s is known, i.e. for the pixels belonging to the sensor area SS, vs ∈ SS. In our modeling the computational diffraction area is always equal to or larger than the sensor area, SS ⊂ SD. For the area outside the sensor the wavefront values are preserved, us = vs for vs ∈ SD\SS.

At Step 3 the estimates {u^t_s} are back-propagated to the object plane and update the object wavefront estimate x^{t+1}. Here M*_s denotes the complex conjugate of Ms, and P⁻¹{u^t_s} denotes the inverse of the model Eq. (5).

Fig. 2. Optical setup. λ/4, λ/2 – retardation plates; L1, L2 – lenses; BS – beamsplitter; SLM – spatial light modulator; CMOS – registration camera.

The unwrapping of the phase, with reconstruction of the absolute phase at Step 4, is necessary only if the object phase range goes beyond 2π [38]. The sparsification (filtering on the basis of sparse approximations) is produced at Step 5. At Step 6 the final estimate of the iteration is obtained.

The SR-SPAR algorithm is based on the phase retrieval algorithm derived by solving the optimization problem for phase retrieval from Poissonian observations. The formalization and techniques used in [22] are fully applicable, with small modifications, to the derivation of SR-SPAR. Thus, the presented SR-SPAR algorithm is designed to be optimal for Poissonian observations.

C. Parameters of the SR-SPAR algorithm

The performance of the SR-SPAR algorithm depends essentially on its parameters. Optimization could be performed for each amplitude/phase distribution and noise level; however, the parameters are fixed for all our experiments. The image patches in BM3D are square, 3 × 3. The group size is limited to 39 patches. The step size between neighbouring patches is equal to 2. The discrete cosine (for patches) and Haar (for the group length) transforms are used for the 3D group data processing in BM3D.

The parameters defining the iterations of the algorithm are as follows: γ1 = 1/χ; tha = 2.0; thϕ = 2.0. The number of iterations is fixed to 30.

For our experiments we use MATLAB R2016b on a computer with an Intel(R) Core(TM) i7-3770 CPU @ 3.40 GHz and 32 GB RAM.

The computational complexity of the algorithm is characterized by the processing time required. For 30 iterations, L = 10 and 64 × 64 images zero-padded to 4992 × 4992, this time equals 2150 s.

We make the MATLAB demo-codes of the developed SR-SPAR algorithm publicly available at http://www.cs.tut.fi/sgn/imaging/sparse; they can be used to reproduce the experiments presented in this paper as well as for further tests.

3. RESULTS AND DISCUSSION

A. Experimental setup

The experimental setup is presented in Fig. 2: a laser LASOS GLK 3250 TS with wavelength λ = 532 nm, a digital 8-bit CMOS camera EVS of 3264 × 2448 pixels with pixel size ∆S = 1.4 µm, and a phase-only reflective spatial light modulator (SLM) Holoeye LETO of 1920 × 1080 pixels with pixel size ∆SLM = 6.4 µm and fill factor 93% were utilized. The light beam emitted by the laser source changes


its vertical polarization to horizontal in a system of two retardation plates, λ/4 and λ/2; next, the beam is expanded by the two-lens collimator L1, L2 and goes through the beamsplitter (BS) to the SLM, where the light wavefront obtains the desired phase retardation and propagates to the CMOS camera through the same beamsplitter BS. The optical distance between the SLM and the CMOS camera was 21.55 mm. In our system the phase object was modelled on the SLM; in this case it is assumed that its phase distribution is known exactly. The developed algorithm has been tested in numerical experiments on two phase objects with invariant amplitude, B0 = 10, and the phase images "cameraman" and logo "TUT"; see Figs. 3–4, middle images in the top rows.

In our experiments both the phase object and the phase modulation masks are implemented on the SLM. Thus, the object pixel size is equal to the SLM pixel size, ∆o = ∆SLM. The size of the object is 64 × 64 pixels (SLM pixels). In the modelling we use parameters corresponding to those used in the physical experiments: propagation distance z = 21.55 mm, ∆S = 1.4 µm, ∆SLM = 4 × ∆S, ∆o = ∆SLM.

Our objects and sensor are zero-padded to 4992 × 4992 pixels, i.e. N = 4992. This is the largest dimension allowed by the RAM of our computer. We produced experiments with the super-resolution factors rs = 1, 2, 3, 4. In what follows we show the results only for rs = 1 (pixel resolution) and rs = 4 (sub-pixel resolution), for which the computational pixels have the sizes ∆c = 1.4 µm and ∆c = 0.35 µm, respectively.

The propagation distance z = 21.55 mm is the minimum for our optical setup, as restricted by the reflective SLM used. For rs = 1 the distance zmax in Eq. (10) is zmax = 18.4 mm. This upper bound is smaller than the used distance z = 21.55 mm, but the difference between these parameters is not large. Thus we can assume that, at least for the pixel resolution, the theoretical requirement (Eq. (10)) is approximately fulfilled.

B. Simulation experiments

For the simulation of the observations z̃s we take the true 64 × 64 phase image of the object and assume that it is created on the SLM; therefore its pixel size ∆o is equal to ∆SLM. Since our computational pixels ∆c are smaller than ∆SLM, we upsample the phase object by the factor rSLM = ∆SLM/∆c. Next we zero-pad this upsampled object and add the phase masks Ms to its phase. These wavefronts are then propagated to the CMOS camera, and the squared absolute values of the propagation results are observations corresponding to the pixel size ∆c. To get realistic observations we average the intensities of adjacent pixels over rs × rs areas across the whole surface, where rs = ∆S/∆c; as a result we have the ∆S-wise observations z̃s.
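The rs × rs intensity averaging that forms the sensor observations can be sketched as follows (assuming the array dimensions divide evenly by rs):

```python
import numpy as np

def block_average(intensity, rs):
    """Average fine-grid intensities over rs x rs blocks, mimicking the
    integration over the larger sensor pixels."""
    h, w = intensity.shape
    return intensity.reshape(h // rs, rs, w // rs, rs).mean(axis=(1, 3))
```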

In what follows we show the results for nearly noiseless observations with the Poissonian scale parameter χ = 1000. A demonstration of how the SPAR algorithm suppresses the Poissonian noise can be seen in [22], where it is done for the pixel resolution and the FT propagation model. Similarly strong filtering can be demonstrated for the super-resolution by SR-SPAR.

B.1. Pixel resolution

In Figs. 3–4 we present results for the pixel-resolution case, i.e. the computational and sensor pixels have equal sizes: ∆c = ∆S, ∆o = ∆SLM, and ∆SLM = 4 · ∆S, M = 8. The latter means that for the masks we use the SLM superpixel with M = 8.

It follows from the figures that SR-SPAR demonstrates near-perfect results in the phase reconstruction and negligible errors in the amplitude reconstructions. We can also note that the amplitude errors are clearly connected with the phase variations.

Fig. 3. Reconstructions for the simulated phase image "cameraman", pixel resolution, rs = 1. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections. RMSEphase = 0.00253.

Fig. 4. Reconstructions for the simulated phase image logo "TUT", pixel resolution, rs = 1. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections. RMSEphase = 0.000852.

The amplitude reconstructions show clearly visible squares forming an 8 × 8 grid. These squares are the mapping of the SLM superpixels; each of these squares has a size of 8 × 8 SLM pixels. Thus, the total object size is 64 × 64 in SLM pixels and 256 × 256 in the computational pixels ∆c.

In order to optimize the accuracy of the reconstructions, we studied the dependence of the Root-Mean-Square Error (RMSE) on the following four parameters: the number of observations L, the number of iterations T, the standard deviation of the random phase in the phase modulation masks, and the size M of the SLM superpixel. In the following we show these results calculated for the phase reconstructions and for the cameraman phase test-image only.

Broadly similar results have been observed for the "TUT" phase test-image and for the other values of the super-resolution factor rs.

The dependence on the number of observations is presented in Fig. 5. It is a decreasing function that reaches a steady-state value starting from L = 10. We take this value of L for our simulations.


Fig. 5. Phase RMSE as a function of the number of observations L.

However, for processing of experimental data we take the larger value L = 20.

Fig. 6. Phase RMSE as a function of the standard deviation of the zero-mean Gaussian random mask.

The standard deviation (STD) of the random zero-mean Gaussian phase masks is a crucial parameter for the reconstruction, because it influences both the accuracy and the convergence of the algorithm. Figure 6 illustrates the RMSE as a function of the STD. The curve attains its minimum at STD = 0.3π, the value adopted for our experiments.

Fig. 7. Phase RMSE as a function of the SLM superpixel size M.

Another significant parameter is the SLM superpixel size M. Figure 7 clearly indicates that the best value is M = 1, which we accept for the simulation tests. However, it was observed that for processing of experimental data a larger value of M provides better results, so there we take M = 8. This happens due to the cross-talk effects between adjacent SLM pixels [39]. As seen from Fig. 7, M = 8 does not seriously increase the RMSE value compared with M = 1.

Fig. 8. Phase RMSE as a function of the iteration number T.

Figure 8 illustrates the performance of the algorithm with respect to the iteration number T. The RMSE curve decreases monotonically, with steady-state values achieved after about 15 iterations. Nevertheless, for experimental data we take T = 30.

B.2. Subpixel super-resolution

In this subsection we show the results for the super-resolution performance of SR-SPAR with the super-resolution factor rs = 4, i.e. ∆S = rs · ∆c, ∆SLM = 4 · ∆S, ∆o = ∆SLM and ∆c = 0.35 µm. This computational pixel is smaller than the wavelength, λ = 532 nm; in this way we arrive at sub-wavelength resolution. Figures 9–10 demonstrate nearly perfect reconstructions, similar to those obtained for the pixel-resolution case.

Fig. 9. Reconstructions for the simulated cameraman phase image, subpixel super-resolution, rs = 4. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections. RMSEphase = 0.00228.

C. Physical experiments

The parameters used in the physical experiments correspond exactly to those of the simulation. The phase objects "cameraman" and "TUT" are modelled by the SLM with the phase ranges [0, π/5] and [0, π/6], respectively. The computational pixels have the sizes ∆c = 1.4 µm and 0.35 µm for pixel and sub-pixel resolution, respectively.

C.1. Pixel resolution

The pixel-resolution images are shown in Figs. 11–12.


Fig. 10. Reconstructions for the simulated phase image logo "TUT", subpixel super-resolution. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections. RMSEphase = 0.000592.

Despite the large distance z between the object and sensor planes and, respectively, the strong diffraction effects in the registered holograms, SR-SPAR gives reliable results, with a clearly visible "cameraman" image in Fig. 11. The cameraman shape is also well seen in the amplitude reconstruction; it appears that this happens due to a non-ideality of the SLM, with some leakage from phase to amplitude modulation. The corresponding phase cross-section demonstrates that the reconstruction follows the respective changes in the original image. The reconstructions for the "TUT" phase object are also quite acceptable, as is seen from the phase cross-section showing exactly the locations of the peaks as well as their magnitudes. We can also see features of the phase object in the amplitude reconstruction; again, at least partially, this can be attributed to the non-ideality of the SLM. The blurred shape of the reconstructions, as compared with the reconstructions obtained in the simulation tests, is due to the rather approximate modeling of the wavefront propagation embedded in SR-SPAR and defined by Eqs. (5)-(6).

C.2. Subpixel super-resolution

The results for sub-pixel resolution are presented in Figs. 13–14. Again we can see quite good reconstructions for both the "cameraman" and "TUT" phase objects. The comments on the amplitude reconstructions are identical to those given above for the pixel-resolution case. It is interesting that the subpixel super-resolution reconstructions look very similar to the reconstructions obtained for the pixel resolution. The only difference well seen in the cross-sections is that the sub-pixel resolution images are smoother than the pixel-resolution ones. The blurred character of the reconstructions can be attributed to the rather approximate modeling of the wavefront propagation by Eqs. (5)-(6).

For the super-resolution, ∆c = ∆S/4 and, with the size of the image support in computational pixels fixed at N = 4992, zmax in Eq. (10) takes the value zmax = 18.4/16 mm ≈ 1.15 mm. This is much less than the used z = 21.55 mm, so the condition of Eq. (10) is severely violated. This may be one of the causes of the discrepancy between reality and the model used for the wavefront propagation.

Fig. 11. Reconstructions for experimental data, phase object "cameraman", pixel resolution. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections.

Fig. 12. Reconstructions for experimental data, phase object logo "TUT", pixel resolution. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections.

4. CONCLUSION

Computational subpixel super-resolution phase retrieval is considered for phase-coded intensity observations. An iterative algorithm is proposed for the lensless setup with the scalar Rayleigh-Sommerfeld model for wavefront propagation. One of the essential instruments of the algorithm is a sparsity hypothesis applied to both phase and amplitude. The efficiency of the algorithm is confirmed by simulation tests and experiments. It is shown that high-level super-resolution can be achieved with a super-resolution factor of up to 4, i.e. the pixel size of the reconstructed object is 4 times smaller than the pixel size of the sensor. In comparison with the wavelength, the achieved computational pixel is about two thirds of the wavelength.

The following further work is planned. First, we are going to use a transmissive (rather than reflective) SLM, which allows a much smaller propagation distance z and in this way fulfills the condition of Eq. (10). Second, we are going to use a physical phase test-object in our experiments, which will allow observation of details with resolution finer than the SLM pixel size. Third, in the algorithm development, an adaptive correction of the transfer function for the wavefront propagation will be designed.


Fig. 13. Reconstructions for experimental data, phase object "cameraman", subpixel super-resolution. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections.

Fig. 14. Reconstructions for experimental data, phase object logo "TUT", subpixel super-resolution. Top row: reconstructed and original phases (left and middle images, respectively) and reconstructed amplitude (right image). Bottom row: phase (left) and amplitude (right) longitudinal cross-sections.


FUNDING

This work is supported by the Academy of Finland, project no. 287150, 2015-2019; the Russian Ministry of Education and Science (project within the state mission for institutions of higher education, agreement 3.1893.2017/PP); and Horizon 2020 TWINN-2015, grant 687328 - HOLO.

REFERENCES

1. R. K. Tyson, "Principles of Adaptive Optics," 4th ed., CRC Press (2014).
2. L. Wang and H. Wu, "Biomedical Optics: Principles and Imaging," John Wiley & Sons, Inc. (2007).
3. B. Kress and P. Meyrueis, "Applied Digital Optics: From Micro-Optics to Nanooptics," John Wiley & Sons, Inc. (2009).
4. E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal. 39, 277–299 (2015).
5. D. L. Misell, "An examination of an iterative method for the solution of the phase problem in optics and electron optics: I. Test calculations," J. Phys. D: Appl. Phys. 6, 2200–2216 (1973).
6. W. O. Saxton, "Correction of artefacts in linear and non-linear high resolution electron micrographs," J. Microsc. Spectrosc. Electron. 5, 661–670 (1980).
7. G. Pedrini, W. Osten, and Y. Zhang, "Wave-front reconstruction from a sequence of interferograms recorded at different planes," Opt. Lett. 30, 833–835 (2005).
8. P. Almoro, G. Pedrini, and W. Osten, "Complete wavefront reconstruction using sequential intensity measurements of a volume speckle field," Appl. Opt. 45, 8596–8605 (2006).
9. P. Almoro, A. M. S. Maallo, and S. Hanson, "Fast-convergent algorithm for speckle-based phase retrieval and a design for dynamic wavefront sensing," Appl. Opt. 48, 1485–1493 (2009).
10. C. Kohler, F. Zhang, and W. Osten, "Characterization of a spatial light modulator and its application in phase retrieval," Appl. Opt. 48, 4003–4008 (2009).
11. L. Camacho, V. Micó, Z. Zalevsky, and J. García, "Quantitative phase microscopy using defocusing by means of a spatial light modulator," Opt. Express 18, 6755–6766 (2010).
12. P. Almoro, G. Pedrini, P. N. Gundu, W. Osten, and S. G. Hanson, "Enhanced wavefront reconstruction by random phase modulation with a phase diffuser," Opt. Lasers Eng. 49, 252–257 (2011).
13. E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval from coded diffraction patterns," Appl. Comput. Harmon. Anal. 39, 277–299 (2015).
14. P. Almoro, Q. D. Pham, D. I. Serrano-García, S. Hasegawa, Y. Hayasaki, M. Takeda, and T. Yatagai, "Enhanced intensity variation for multiple-plane phase retrieval using a spatial light modulator as a convenient tunable diffuser," Opt. Lett. 41, 2161–2164 (2016).
15. J. W. Goodman, "Introduction to Fourier Optics," 3rd ed., Roberts & Company, Englewood (2005).
16. R. W. Gerchberg and W. O. Saxton, "A practical algorithm for the determination of phase from image and diffraction plane pictures," Optik 35, 237–246 (1972).
17. J. R. Fienup, "Phase retrieval algorithms: a comparison," Appl. Opt. 21, 2758–2769 (1982).
18. C. Guo, S. Liu, and J. T. Sheridan, "Iterative phase retrieval algorithms. I: optimization," Appl. Opt. 54, 4698–4708 (2015).
19. Y. Shechtman, Y. C. Eldar, O. Cohen, H. N. Chapman, J. Miao, and M. Segev, "Phase retrieval with application to optical imaging: a contemporary overview," IEEE Signal Process. Mag. 32, 87–109 (2015).
20. E. J. Candès, X. Li, and M. Soltanolkotabi, "Phase retrieval via Wirtinger flow: theory and algorithms," IEEE Trans. Inf. Theory 61, 1985–2007 (2015).
21. Y. Chen and E. J. Candès, "Solving random quadratic systems of equations is nearly as easy as solving linear systems," http://statweb.stanford.edu/~candes/papers/TruncatedWF.pdf (2015).
22. V. Katkovnik, "Phase retrieval from noisy data based on sparse approximation of object phase and amplitude," http://www.cs.tut.fi/~lasip/DDT/pdfs/Single_column.pdf (2015).

23. Q. Lian, B. Shi, and S. Chen, "Transfer orthogonal sparsifying transform learning for phase retrieval," Digit. Signal Process. 62, 11–25 (2017).

24. A. Fannjiang, "Absolute uniqueness of phase retrieval with random illumination," Inverse Probl. 28, 075008 (2012).
25. W. Bishara, U. Sikora, O. Mudanyali, T. W. Su, O. Yaglidere, S. Luckhart, and A. Ozcan, "Holographic pixel super-resolution in portable lensless on-chip microscopy using a fiber-optic array," Lab Chip 11, 1276–1279 (2011).
26. K. Guo, S. Dong, P. Nanda, and G. Zheng, "Optimization of sampling pattern and the design of Fourier ptychographic illuminator," Opt. Express 23, 6171–6180 (2015).
27. A. Greenbaum, W. Luo, B. Khademhosseinieh, T. W. Su, A. F. Coskun, and A. Ozcan, "Increased space-bandwidth product in pixel super-resolved lensfree on-chip microscopy," Sci. Rep. 3, 1717 (2013).
28. S. Gazit, A. Szameit, Y. C. Eldar, and M. Segev, "Super-resolution and reconstruction of sparse sub-wavelength images," Opt. Express 17, 23920–23946 (2009).
29. J. Song, C. L. Swisher, H. Im, S. Jeong, D. Pathania, Y. Iwamoto, M. Pivovarov, R. Weissleder, and H. Lee, "Sparsity-based pixel super resolution for lens-free digital in-line holography," Sci. Rep. 6, 24681 (2016).
30. V. Katkovnik and K. Egiazarian, "Sparse super-resolution phase retrieval from phase-coded noisy intensity patterns," http://www.cs.tut.fi/sgn/imaging/sparse/ (2017).
31. V. Katkovnik and J. Astola, "High-accuracy wavefield reconstruction: decoupled inverse imaging with sparse modeling of phase and amplitude," J. Opt. Soc. Am. A 29, 44–54 (2012).
32. V. Katkovnik and J. Astola, "Sparse ptychographical coherent diffractive imaging from noisy measurements," J. Opt. Soc. Am. A 30, 367–379 (2013).
33. K. Dabov, A. Foi, V. Katkovnik, and K. Egiazarian, "Image denoising by sparse 3D transform-domain collaborative filtering," IEEE Trans. Image Process. 16, 2080–2095 (2007).
34. A. Danielyan, V. Katkovnik, and K. Egiazarian, "BM3D frames and variational image deblurring," IEEE Trans. Image Process. 21, 1715–1728 (2012).
35. N. Verrier and M. Atlan, "Off-axis digital hologram reconstruction: some practical considerations," Appl. Opt. 50, H136–H146 (2011).
36. F. Zhang, G. Pedrini, and W. Osten, "Reconstruction algorithm for high-numerical-aperture holograms with diffraction-limited resolution," Opt. Lett. 31, 1633–1635 (2006).
37. N. Petrov, V. Bespalov, and M. Volkov, "Phase retrieval of THz radiation using a set of 2D spatial intensity measurements with different wavelengths," Proc. SPIE 8281, 82810J (2012).
38. J. M. Bioucas-Dias and G. Valadão, "Phase unwrapping via graph cuts," IEEE Trans. Image Process. 16, 698–709 (2007).
39. P. Gemayel, B. Colicchio, A. Dieterlen, and P. Ambs, "Cross-talk compensation of a spatial light modulator for iterative phase retrieval applications," Appl. Opt. 55, 802–810 (2016).