Integral Imaging with Improved Depth of Field by Use of Amplitude-Modulated Microlens Arrays

Manuel Martínez-Corral, Bahram Javidi, Raúl Martínez-Cuenca, and Genaro Saavedra

One of the main challenges in three-dimensional integral imaging is its limited depth of field. Such a limitation is imposed by diffraction, among other factors. The easiest way to improve the depth of field is to reduce the numerical aperture of the microlenses. However, such an improvement is obtained at the expense of an important deterioration in the spatial resolution. We propose a technique, novel in the context of integral imaging, for improving the depth of field with no deterioration of the spatial resolution. The technique, based on amplitude modulation of the array of phase elements, can substantially improve the figure of merit given by the product of the depth of focus and the squared resolution. © 2004 Optical Society of America

OCIS codes: 110.6880, 110.4190, 350.5730.

M. Martínez-Corral and B. Javidi ([email protected]) are with the Department of Electrical and Computer Engineering, University of Connecticut, Storrs, Connecticut 06269-1157. R. Martínez-Cuenca and G. Saavedra are with the Department of Optics, University of Valencia, E-46100 Burjassot, Spain. Received 14 November 2003; revised manuscript received 14 June 2004; accepted 6 July 2004. © 2004 Optical Society of America. APPLIED OPTICS, Vol. 43, No. 31, 1 November 2004, pp. 5806–5813.

1. Introduction

Integral imaging is a promising three-dimensional (3D) imaging technique that provides autostereoscopic images from a continuous viewpoint and does not require the use of any special glasses.1–6 Although integral photography was first proposed by Lippmann in 1908,1 there has been recent interest in it because of its application to 3D TV and visualization.7 The term integral imaging was introduced to reflect the imaging and image-processing nature of modern applications.8,9 In an integral-imaging system, a two-dimensional (2D) array of 2D elemental images of a given surface 3D object is generated by a microlens array and recorded on a CCD. Each elemental image shows a different perspective of the 3D object. The recorded 2D elemental images are displayed by an optical device, such as an LCD, in front of another microlens array to reconstruct the 3D image.

Since its rebirth in 1997 by Okano et al.,10 many important theoretical and experimental contributions to integral imaging have been reported.11–18 Three-dimensional integral-imaging systems still face some problems. One issue is the limited depth of field. In a typical scene, objects exist at different depth positions. Since only a single plane is used to capture the images, it is not possible for all objects to be in focus; blurred images are then obtained of objects, or parts of objects, that are out of focus. Although the depth of field of integral-imaging systems is influenced by many parameters (related to both the capture and the display systems), it is apparent that to display a clear integral image of an axially elongated 3D object it is essential to capture sharp 2D elemental images of it. For this reason the bottleneck of the depth of field in integral imaging is the limited depth of field of the microlens array used in the pickup stage. One could overcome this problem by reducing the numerical aperture (NA) of the lenses. However, such a reduction would produce, as a collateral effect, an important worsening of the lateral resolution of the capture stage and therefore of the spatial resolution of the overall integral-imaging system. In the past few years some new alternative techniques have been proposed to improve the depth of field of integral-imaging systems. These methods are based on the synthesis of real and virtual image fields,19 or on the use of lenses with nonuniform focal lengths and aperture sizes.20,21

In this paper we propose a new method for producing a significant enlargement of the depth of field of the integral-imaging pickup system. This enlargement is not accompanied by a deterioration of spatial resolution. The method, whose main feature is its simplicity, is based on an adequate binary modulation of the amplitude transmittance of the microlenses.

To present our technique, we start by carrying out a theoretical analysis of the pickup stage of integral-imaging systems in terms of scalar diffraction theory. This analysis explicitly takes into account, for the first time we believe, the fact that the object is a surface object. This assumption allows us to develop a set of equations [Eqs. (1)–(7)] that constitutes a strict description of the diffractional behavior of the pickup stage of an integral-imaging system. The analysis shows that, contrary to what is generally assumed, integral-imaging systems are not linear and shift invariant, and therefore it is not valid, strictly speaking, to define either a point-spread function or an optical transfer function. Our second step is to design an adequate amplitude modulator to be applied to every element of the microlens array. We have performed a numerical simulation with a computer-synthesized object to show that the depth of focus is substantially improved.

2. Theoretical Analysis of the Capture Stage

To discuss the method, we start by describing the capture stage from the point of view of diffraction theory. Note that since the microlens arrays generally used in integral-imaging experiments are of low NA (NA < 0.2), the analysis can be accurately performed within the frame of paraxial scalar diffraction theory. In Fig. 1 we show the capture setup. Spatial coordinates are denoted x = (x, y) and z for the directions transverse and parallel to the system's main optical axis. We consider a surface object under spatially incoherent illumination. For simplicity we assume, in the first stage of our formulation, quasi-monochromatic illumination with mean wavelength λ. Light emitted by the surface object is collected by the microlens array to form a collection of 2D elemental aerial images. The images lie in the so-called aerial pickup plane, which is set at a distance g from the microlens array. The reference plane and the aerial pickup plane are conjugate through the microlenses, so that distances a and g are related by the lens law, 1/a + 1/g − 1/f = 0.
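As a quick sanity check of the conjugation condition, a minimal sketch (using the parameter values quoted later in the paper, f = 5.0 mm and a = 100 mm) solves the lens law for g:

```python
# Lens-law check: the reference plane (distance a) and the aerial pickup
# plane (distance g) are conjugate through each microlens: 1/a + 1/g - 1/f = 0.
f, a = 5.0, 100.0               # focal length and reference distance (mm)
g = 1.0 / (1.0 / f - 1.0 / a)   # solve the lens law for g
print(g)                        # ≈ 5.263 mm
```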

Fig. 1. Schematic, not to scale, of the capture setup of a 3D integral-imaging system. The points of the surface object O(x, z) out of the reference plane produce blurred images in the aerial pickup plane and therefore on the CCD. In the relay system the field lens collects the rays proceeding from the outermost microlenses, and the camera lens projects the images onto the CCD.

Any elemental image shows a different perspective of the surface object. In our scheme a relay system, composed of a field lens and a camera lens, is used to image the aerial images onto the pickup device (usually a CCD camera). The lateral magnification of the relay system is adjusted so that the size of the collection of elemental images matches the CCD.

The intensity distribution of the incoherent light scattered by the object can be represented by the real and positive function

O(x, z) = R(x) δ[z − f(x)],   (1)

where the function R(x) accounts for the object intensity reflectivity, and f(x) ≥ 0 is the function that describes the surface.

We consider now the light scattered at an arbitrary point (x, z) of the surface object. It is straightforward, by application in cascade of the paraxial scalar diffraction equations, to find that the intensity at a given point x′ = (x′, y′) of the aerial pickup plane is given by

H_λ(x′; x, z) = Σ_m | exp{iπ|mp − x|² / [λ(a + z)]} ∫ P_z(x₀) exp{i(2π/λg) x₀·[x′ − M_z(mp − x) − mp]} d²x₀ |²,   (2)

where m = (m, n) accounts for the indices of the microlenses in the (x, y) directions and p stands for the constant pitch of the microlens array. In Eq. (2), M_z = g/(a + z) is a magnification factor that depends on the depth coordinate z. The so-called generalized pupil function is

P_z(x₀) = p(x₀) exp{i(π/λ)[1/(a + z) − 1/a]|x₀|²}.   (3)

This function accounts for the pupil function of the microlenses, p(x₀), together with the phase modulation due to defocus errors. Note that in principle the main focus of our research is not the intensity distribution at the aerial pickup plane but the distribution at the pickup-device plane. Note, however, that since the latter distribution is a uniformly scaled version of the one in Eq. (2), it is correct to base our study on that equation.

Assuming nonsignificant overlapping between the elemental diffraction spots provided by the different microlenses, Eq. (2) can be rewritten, to a very good approximation, as the 2D convolution between the properly scaled and shifted 2D Fourier transform of P_z(x₀) and a sampling function, that is,

H_λ(x′; x, z) = |P̃_z[(x′ + M_z x)/λg]|² ⊗ Σ_m δ[x′ − mp(1 + M_z)],   (4)

where P̃_z stands for the 2D Fourier transform of P_z and ⊗ denotes the 2D convolution.

Let us consider now the overall light proceeding from the surface object. In this case the intensity distribution in the pickup plane is obtained as a weighted superposition of the diffraction spots provided by all the points of the surface object, namely,

I_λ(x′) = ∫∫ R(x) δ[z − f(x)] H_λ(x′; x, z) d²x dz = ∫ R(x) H_λ[x′; x, z = f(x)] d²x.   (5)

Note that the function H_λ(·) depends explicitly on x′ + M_z x, that is,

H_λ(x′; x, z) = H_λ(x′ + M_z x; 0, z) ≡ H_λ(x′ + M_z x; z).   (6)

Then Eq. (5) can be rewritten as

I_λ(x′) = ∫ R(x) H_λ[x′ + M_z x; z = f(x)] d²x.   (7)

Although Eq. (7) seems to represent a 2D convolution, it does not. This is because the function P_z(x₀) strongly depends on the axial position of the corresponding surface points. In other words, the function H_λ(·) is different for different values of z. Besides, the factor M_z also depends on z, and therefore parts of the object at different depths are magnified in different ways. Consequently the impulse response is different at each depth. This fact implies that the pickup system is not linear and shift invariant, and therefore neither the point-spread function nor the optical transfer function can be rigorously defined.
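The depth dependence of the magnification alone already breaks shift invariance. A small numerical illustration (parameter values are those used later in the paper; the depths are the chart positions of Section 3):

```python
# M_z = g/(a + z): equal lateral displacements at different depths are mapped
# to different displacements in the pickup plane, so a single convolution
# kernel cannot describe the whole surface object.
f, a = 5.0, 100.0                      # focal length and reference distance (mm)
g = 1.0 / (1.0 / f - 1.0 / a)          # aerial pickup distance from the lens law
for z in (-8.3, 0.0, 10.0):            # depths (mm)
    print(z, g / (a + z))              # magnification shrinks as z grows
```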

As seen above, the acquired image is composed of an array of elemental images of the surface object, each obtained from a different viewpoint. Let us now focus on the elemental image produced by one of the microlenses, for example, the one in the center of the microlens array. (This selection does not subtract any generality from our study.) The intensity distribution of such an elemental image is given by

I⁰_λ(x′) = ∫ R(x) H⁰_λ[x′ + M_z x; z = f(x)] d²x,   (8)

where

H⁰_λ(x′; z) = |P̃_z(x′/λg)|².   (9)


We assume that the pupil of each microlens is a circle with diameter φ. In such a case it is convenient to express Eq. (9) in cylindrical coordinates:

H⁰_λ(r, z) = | ∫₀^{φ/2} p(r₀) exp{−iπz r₀² / [λa(a + z)]} J₀(2πr r₀/λg) r₀ dr₀ |².   (10)
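Equation (10) can be evaluated numerically. The sketch below (NumPy and SciPy assumed available) integrates the in-focus case (z = 0) for a clear circular pupil and locates the first zero of the diffraction spot, reproducing the resolution figures quoted in the next paragraph:

```python
import numpy as np
from scipy.special import j0

lam = 0.5e-3                          # wavelength (mm)
f, a, phi = 5.0, 100.0, 2.0           # focal length, reference distance, pupil diameter (mm)
g = 1.0 / (1.0 / f - 1.0 / a)         # aerial pickup distance from the lens law

# In-focus (z = 0) field of Eq. (10): integral of J0(2*pi*r*r0/(lam*g)) * r0 over the pupil
r0 = np.linspace(0.0, phi / 2, 2001)          # radial pupil coordinate (mm)
r = np.linspace(1e-6, 3e-3, 3001)             # pickup-plane radius (mm)
kernel = j0(2 * np.pi * np.outer(r, r0) / (lam * g)) * r0
field = kernel.sum(axis=1) * (r0[1] - r0[0])  # Riemann sum over r0

first_zero = r[np.argmax(field < 0)]          # first sign change = radius of the Airy zero
print(first_zero * 1e3)                       # ≈ 1.61 (µm, pickup plane)
print(first_zero * 1e3 * a / g)               # ≈ 30.5 (µm, back-projected to the reference plane)
```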

3. Influence of the Amplitude Modulation of the Microlenses

As stated above, the function in Eq. (10) does not represent the microlens 3D PSF. Rather, it accounts for the intensity response, at the pickup plane, to a point source that is axially displaced in the neighborhood of the reference plane. In Fig. 2 we represent in a 3D plot a meridian section of Eq. (10). The parameters for the calculation are φ = 2.0 mm, f = 5.0 mm, λ = 0.5 μm, and a = 100 mm. Note that, because of the low NA of the lens, the axial extent of H⁰_λ is much greater than the lateral extent. In the z = 0 section we can recognize the Airy disk pattern, whose extent determines the lateral resolution of the system. Along the optical axis, r = 0, we find a sinc² modulation whose extent determines the depth of field. Accordingly, the lateral resolution of the imaging system is determined by the radius of the first zero ring of H⁰_λ(r, 0). From Fig. 2 we find that in this case the resolution limit, as defined by Rayleigh, is 1.61 μm if measured in the pickup plane, or 30.6 μm if evaluated in the reference-object plane. The depth of field is usually evaluated by means of the so-called Rayleigh range, which is defined as the extent of the axial interval in which H⁰_λ(0, z) is higher than √2/2 times its maximum value.22 In this case the Rayleigh range is −3.3 mm ≤ z ≤ 3.1 mm. Note that the pixel size of the capture device is a factor that strongly influences the depth of focus and resolution; in our calculations, however, we considered the pixels to be sufficiently fine.
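On axis (r = 0) the integral in Eq. (10) has a closed form: the normalized intensity is sinc-squared in the defocus phase. A sketch of the Rayleigh-range computation (the exact signs of the two bounds depend on the sign convention adopted for z; the width of the interval is the robust quantity):

```python
import numpy as np

lam, a, R = 0.5e-3, 100.0, 1.0        # wavelength, reference distance, pupil radius (mm)

def axial_intensity(z):
    # |integral_0^R exp(i*alpha*r0^2) r0 dr0|^2, normalized to its z = 0 value,
    # equals sinc^2(alpha*R^2/2) with alpha = pi*z/(lam*a*(a + z)).
    x = np.pi * z * R**2 / (2 * lam * a * (a + z))
    return np.sinc(x / np.pi)**2      # np.sinc(t) = sin(pi*t)/(pi*t)

z = np.linspace(-8.0, 8.0, 160001)
zin = z[axial_intensity(z) >= np.sqrt(2) / 2]
print(zin[0], zin[-1], zin[-1] - zin[0])   # bounds of magnitude ≈ 3.1–3.3 mm; width ≈ 6.4 mm
```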

Fig. 2. Three-dimensional plot of the meridian section of function H⁰_λ(r, z) when the pupil of the microlens is a circle of diameter φ. The parameters for the calculation are φ = 2.0 mm, f = 5.0 mm, λ = 0.5 μm, and a = 100 mm.

To illustrate the limitations in the depth of field of an integral-imaging system, we performed a numerical experiment in which we obtained the elemental images of a computer-synthesized object. Since the aim of the experiment was to appreciate the improvement in the depth of field, we selected as the object the Snellen E tumbling chart, which is usually used to grade resolution and defocus errors in visual optics. In the experiment the E patterns are positioned side by side and are longitudinally located at z1 = 10.0 mm, z2 = 5.0 mm, z3 = −4.6 mm, and z4 = −8.3 mm, as depicted in Fig. 3. Note that the axial positions are not symmetric about the reference plane but correspond to the same amount of defocus as defined in terms of the well-known defocus coefficient w₂₀ = φ²z/[8a(a + z)].23 The elemental images were calculated according to Eq. (7).

The images captured from nine different views are shown in Fig. 4. In Fig. 5 we show an enlarged image of the central element, m = (0, 0). It is clear from Fig. 5 that the images of the E patterns at z1 and z4 are highly blurred. Note that, since the imaging system is not telecentric,24 the images corresponding to planes with the same modulus of w₂₀ but a different sign are different. This is due to the different scale of the defocused images. Because of this effect, the elemental image of the E pattern located at z1 is much more blurred than the elemental image corresponding to the E pattern at z4. It is noticeable that in the case of the pattern at z1 one can hardly distinguish the original orientation of the E in the elemental image.
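The claim that the four axial positions share the same amount of defocus can be checked directly from the defocus coefficient given above:

```python
phi, a = 2.0, 100.0                          # pupil diameter and reference distance (mm)

def w20(z):
    # Defocus coefficient of Section 3 (mm)
    return phi**2 * z / (8 * a * (a + z))

for za, zb in ((10.0, -8.3), (5.0, -4.6)):   # the paired chart positions
    print(za, zb, w20(za), w20(zb))          # nearly equal magnitudes, opposite signs
```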

4. Binary Amplitude Modulation Technique

The easiest way to improve the depth of field of the capture setup is to reduce the NA of the microlenses. However, such an improvement is accompanied by a proportional deterioration of the lateral resolution. This problem can be overcome by use of amplitude-modulation techniques. Specifically, we propose the use of binary amplitude modulators. This kind of modulator has been successfully applied to improve the performance of other 3D imaging techniques such as confocal microscopy25 or multiphoton scanning microscopy.26 The technique, which is simple albeit surprisingly effective, has never been proposed in the context of integral imaging. It consists of obscuring the central part of each microlens. Such an obscuration allows the secondary Huygens wavelets proceeding from the outer part of the lenses to interfere constructively over an enlarged axial range. Then, by simply placing an opaque circular mask of diameter D = εφ (with ε < 1) just behind each microlens, one can increase the focal depth of the microlens array. It is known that the higher the value of the obscuration ratio ε, the broader the axial intensity spot. In the ideal case, one could obtain an infinite depth of focus by letting ε approach unity. However, such a situation is not convenient from an experimental point of view, because the higher the value of ε, the smaller the light efficiency of the system. Moreover, if one works with only the outermost part of the lenses, the optical aberrations of the system increase dramatically. For these reasons we propose using a binary modulator of obscuration ratio ε = √2/2. This modulator has a light efficiency of 50% and doubles the depth of focus of the system.

Fig. 3. Schematic, not to scale, of the integral-imaging numerical experiment. The size of the legs of the charts used in our experiments is 51 μm, which is approximately twice the Rayleigh resolution limit.

Fig. 4. Two-dimensional elemental images of the tumbling chart captured from nine different views. For each elemental image we do not show the entire field of view but only a portion of 0.4 mm × 0.4 mm centered at the corresponding optical axis.

Fig. 5. Enlarged view of the central elemental image in Fig. 4.

In Fig. 6 we plot a meridian section of function H⁰_λ(r, z) for the case of amplitude modulation with obscuration ratio ε = √2/2. In this case the Rayleigh resolution limit is 22.3 μm (as evaluated in the reference plane), whereas the depth of field interval is −6.8 mm ≤ z ≤ 6.0 mm. If we compare these results with those obtained with the nonmodulated setup (see Fig. 2), we find that the depth of field has been doubled and that the 2D density of resolved points has been increased by a factor of 1.85. We can evaluate the power of our technique in terms of a recently defined figure of merit for integral-imaging systems: the product of the depth of focus and the resolution squared (PDRS).19 We find that the use of binary amplitude modulation allows the PDRS to be increased by a factor of 3.7.
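These figures can be verified numerically. The sketch below (SciPy assumed) computes the Rayleigh-range ratio and the first lateral zero for the ε = √2/2 annular pupil; the small discrepancies from the quoted 1.85 and 3.7 come from rounding in the quoted resolution values:

```python
import numpy as np
from scipy.special import j1
from scipy.optimize import brentq

lam, a, phi = 0.5e-3, 100.0, 2.0   # wavelength, reference distance, pupil diameter (mm)
eps = np.sqrt(2) / 2               # obscuration ratio of the binary mask
R = phi / 2

def rayleigh_range(e):
    # Width of the axial interval where the on-axis intensity of Eq. (10),
    # for an annular pupil of inner radius e*R, stays above sqrt(2)/2.
    z = np.linspace(-10.0, 10.0, 200001)
    x = np.pi * z * R**2 * (1 - e**2) / (2 * lam * a * (a + z))
    zin = z[np.sinc(x / np.pi)**2 >= np.sqrt(2) / 2]
    return zin[-1] - zin[0]

ratio_axial = rayleigh_range(eps) / rayleigh_range(0.0)   # ≈ 2: depth of focus doubled

# Lateral zero: integral_eps^1 t*J0(v*t) dt = [J1(v) - eps*J1(eps*v)]/v vanishes at v0
v0 = brentq(lambda v: j1(v) - eps * j1(eps * v), 2.0, 3.6)
res_mod = v0 * lam * a / (np.pi * phi) * 1e3          # µm, in the reference plane (≈ 22.3)
res_clear = 3.8317 * lam * a / (np.pi * phi) * 1e3    # clear-pupil Airy zero (≈ 30.5 µm)

density_gain = (res_clear / res_mod)**2               # 2D density of resolved points
print(ratio_axial, res_mod, density_gain, ratio_axial * density_gain)
```

The product in the last line is the PDRS gain, close to the quoted factor of 3.7.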

Fig. 6. Three-dimensional plot of the meridian section of function H⁰_λ(r, z) when the amplitude transmittance of the microlens is modulated by a binary mask of obscuration ratio ε = √2/2. As in the previous experiment, the parameters for the calculation are φ = 2.0 mm, f = 5.0 mm, λ = 0.5 μm, and a = 100 mm.

Also in this case we have performed the numerical experiment with the same Snellen E tumbling chart as in Section 3. The nine elemental images obtained when the amplitude-modulated lenslets are used are shown in Fig. 7. The enlarged view of the central elemental image, m = (0, 0), is shown in Fig. 8. By comparing Figs. 5 and 8, one observes a noticeable improvement in the depth of field provided by the amplitude-modulated phase-elements method. Note, on the other hand, that the images of the objects at z2 and z3 are slightly more blurred than those obtained with the nonmodulated architecture. This fact seems to contradict the statement that binary modulation improves the lateral resolution, as defined by Rayleigh, for objects placed at any depth z. Take into account, however, that the Rayleigh resolution limit is defined for point objects, and therefore it does not hold for more elaborate objects. In such cases the use of binary amplitude modulation improves the lateral resolution over a very large range of depth positions but produces a slight worsening for low values of the depth coordinate z.

Fig. 7. Two-dimensional elemental images captured with the amplitude-modulated microlens array. For each elemental image we do not show the entire field of view but only a portion of 0.4 mm × 0.4 mm centered at the corresponding optical axis.

Fig. 8. Enlarged view of the central elemental image in Fig. 7. Note that, strictly speaking, the light intensity of these images should be half of that of the images in Fig. 5. This effect does not appear in the figures because they are normalized differently.

Fig. 9. Cross sections of function H₀(x′, z) corresponding to the nonmodulated lenses and the amplitude-modulated lenses.

5. Polychromatic-Illumination Case

In this section we apply our formalism to a more realistic situation in which we consider the surface object to be illuminated by a white-light source. In this case the intensity distribution captured by the CCD, corresponding to the central elemental image, is given by

I₀(x′) = ∫ V(λ) I⁰_λ(x′) dλ,   (11)

Fig. 10. Enlarged view of the central elemental image obtained by taking into account the polychromatic nature of the illumination: top, image obtained with the nonmodulated lens; bottom, image obtained with the amplitude-modulated lens.

where I⁰_λ(x′) was defined in Eq. (8) and the function V(λ) accounts for the normalized spectral sensitivity of the CCD. Note that Eq. (11) can be rewritten as

I₀(x′) = ∫ R(x) H₀(x′ + M_z x; z) d²x,   (12)

where

H₀(x′; z) = ∫ V(λ) H⁰_λ(x′; z) dλ.   (13)

Fig. 11. Reconstructed image obtained from the simulated elemental images obtained with, top, the nonmodulated pickup lenslet array and, bottom, the binary-modulated pickup lenslet array.

Equation (13) represents the response of the CCD to a white-light point source that is axially displaced a distance z from the reference plane. In Fig. 9 we plotted several cross sections of function H₀(x′; z) corresponding to the two cases under study. The parameters for the calculation were the same as in Figs. 2 and 6, and for the function V(λ) we used the spectral sensitivity provided by Apogee for the CCD model AP1E. It is clear that also in the case of polychromatic illumination the use of the binary obscuration produces a significant increase in the tolerance of the system to defocus.

In Fig. 10 we show the result of the same numerical experiment as in Figs. 5 and 8, but now taking into account the polychromatic nature of the illumination. Note that now the images are slightly more blurred than in the monochromatic case, but it is still clear that the amplitude modulation of the microlenses produces a significant enlargement of the depth of field of the integral-imaging system.

To conclude our numerical integral-imaging experiment, we calculated the reconstructed image by simulation with the simulated elemental images. For this numerical reconstruction we considered that the integral image was observed by an eye with a pupil diameter of φ_E = 3 mm, placed on the optical axis of the central microlens at a distance of l = 300 mm from the reference plane. The reconstructed images are shown in Fig. 11. Note that we used for the reconstruction, in the two cases under study, a nonmodulated lenslet array. This is because the use of binary obscurations in the reconstruction setup could produce an important vignetting effect in the image. It is clear that the quality of the reconstructed integral image is much better in the case of the amplitude-modulated system.

6. Conclusions

In summary, we have presented a new technique for simultaneously improving both the depth of field and the spatial resolution of 3D integral-imaging systems. The technique is based on modulating the amplitude transmittance of the microlenses. We have shown that, simply by properly blocking the central part of the microlenses, one can increase the PDRS of an integral-imaging system by 370%. Besides, owing to the simplicity of the proposed technique, which does not imply any significant modification of the integral-imaging architecture, it could be combined with other techniques19–21 to produce further improvements in the performance of integral-imaging systems.

M. Martínez-Corral and G. Saavedra gratefully acknowledge financial support from Plan Nacional I+D+I (grant DPI2003-4698), Ministerio de Ciencia y Tecnología, Spain. They also acknowledge financial support from the Generalitat Valenciana, Spain (grant grupos03/227).
References

1. M. G. Lippmann, "Épreuves réversibles donnant la sensation du relief," J. Phys. (Paris) 7, 821–825 (1908).
2. H. E. Ives, "Optical properties of a Lippmann lenticulated sheet," J. Opt. Soc. Am. 21, 171–176 (1931).
3. N. A. Valyus, Stereoscopy (Focal, London, 1966).
4. R. L. de Montebello, "Wide-angle integral photography: the integram technique," in Three-Dimensional Imaging, S. A. Benton, ed., Proc. SPIE 120, 73–91 (1970).
5. T. Okoshi, Three-Dimensional Imaging Techniques (Academic, London, 1976).
6. Y. A. Dudnikov, B. K. Rozhkov, and E. N. Antipova, "Obtaining a portrait of a person by the integral photography method," Sov. J. Opt. Technol. 47, 562–563 (1980).
7. T. Motoki, H. Isono, and I. Yuyama, "Present status of three-dimensional television research," Proc. IEEE 83, 1009–1021 (1995).
8. M. McCormick and N. Davies, "Full natural colour 3D optical models by integral imaging," in Proceedings of the Fourth International Conference on Holographic Systems, Components, and Applications (Institution of Electrical Engineers, London, 1993), pp. 237–242.
9. H. Arimoto and B. Javidi, "Integral three-dimensional imaging with digital reconstruction," Opt. Lett. 26, 157–159 (2001).
10. F. Okano, H. Hoshino, J. Arai, and I. Yuyama, "Real-time pickup method for a three-dimensional image based on integral photography," Appl. Opt. 36, 1598–1603 (1997).
11. H. Hoshino, F. Okano, H. Isono, and I. Yuyama, "Analysis of resolution limitation of integral photography," J. Opt. Soc. Am. A 15, 2059–2065 (1998).
12. T. Naemura, T. Yoshida, and H. Harashima, "3-D computer graphics based on integral photography," Opt. Express 8, 255–262 (2001), http://www.opticsexpress.org.
13. J.-H. Park, S.-W. Min, S. Jung, and B. Lee, "Analysis of viewing parameters for two display methods based on integral photography," Appl. Opt. 40, 5217–5232 (2001).
14. L. Erdman and K. J. Gabriel, "High-resolution digital photography by use of a scanning microlens array," Appl. Opt. 40, 5592–5599 (2001).
15. J.-S. Jang and B. Javidi, "Improved viewing resolution of three-dimensional integral imaging by use of nonstationary micro-optics," Opt. Lett. 27, 324–326 (2002).
16. J.-S. Jang and B. Javidi, "Three-dimensional synthetic aperture integral imaging," Opt. Lett. 27, 1144–1146 (2002).
17. J. Arai, H. Hoshino, M. Okui, and F. Okano, "Effects of focusing on the resolution characteristics of integral photography," J. Opt. Soc. Am. A 20, 996–1004 (2003).
18. A. Stern and B. Javidi, "3-D computational synthetic aperture integral imaging (COMPSAII)," Opt. Express 11, 2446–2451 (2003), http://www.opticsexpress.org.
19. J.-S. Jang, F. Jin, and B. Javidi, "Three-dimensional integral imaging with large depth of focus by use of real and virtual image fields," Opt. Lett. 28, 1421–1423 (2003).
20. B. Lee, S.-W. Min, S. Jung, and J.-H. Park, "Computer-generated dynamic three-dimensional display using integral photography adopting Fresnel lenses," in Algorithms and Systems for Optical Information Processing V, B. Javidi and D. Psaltis, eds., Proc. SPIE 4471, 9–17 (2001).
21. J.-S. Jang and B. Javidi, "Large depth-of-focus time-multiplexed three-dimensional integral imaging by use of lenslets with nonuniform focal lengths and aperture sizes," Opt. Lett. 28, 1924–1926 (2003).
22. A. E. Siegman, Lasers (University Science, Sausalito, Calif., 1986).
23. A. Stokseth, "Properties of a defocused optical system," J. Opt. Soc. Am. 59, 1314–1321 (1969).
24. C. J. Zapata-Rodríguez, P. Andrés, M. Martínez-Corral, and L. Muñoz-Escrivá, "Gaussian imaging transformation for the paraxial Debye formulation of the focal region in a low-Fresnel-number optical system," J. Opt. Soc. Am. A 17, 1185–1191 (2000).
25. C. J. R. Sheppard, "Binary optics and confocal imaging," Opt. Lett. 24, 505–506 (1999).
26. M. Martínez-Corral, C. Ibáñez-López, G. Saavedra, and M. T. Caballero, "Axial gain resolution in optical sectioning fluorescence microscopy by shaded-ring filters," Opt. Express 11, 1740–1745 (2003), http://www.opticsexpress.org.