Design and implementation of a 3-D mapping system for highly irregular shaped objects with application to semiconductor manufacturing

Vivek A. Sujan
Steven Dubowsky
Massachusetts Institute of Technology, Department of Mechanical Engineering, Cambridge, Massachusetts 02139

Abstract. The basic technology for a robotic system is developed to automate the packing of polycrystalline silicon nuggets into a fragile fused silica crucible in Czochralski (melt pulling) semiconductor wafer production. The highly irregular shapes of the nuggets and the packing constraints make this a difficult and challenging task. It requires the delicate manipulation and packing of highly irregular polycrystalline silicon nuggets into a fragile fused silica crucible. For this application, a dual optical 3-D surface mapping system that uses active laser triangulation has been developed and successfully tested. One part of the system measures the geometry profile of a nugget being packed and the other the profile of the nuggets already in the crucible. A resolution of 1 mm with a 15-kHz sampling frequency is achieved. Data from the system are used by the packing algorithm, which determines optimal nugget placement. The key contribution is to describe the design and implementation of an efficient and robust 3-D imaging system to map highly irregular shaped objects using conventional components in the context of real commercial manufacturing processes. © 2002 Society of Photo-Optical Instrumentation Engineers. [DOI: 10.1117/1.1474438]

Subject terms: 3-D surface mapping; laser triangulation; silicon nuggets; semiconductor manufacturing.

Paper 010142 received Apr. 23, 2001; revised manuscript received Dec. 28, 2001; accepted for publication Dec. 31, 2001.

1 Introduction

The requirements for growing large single device-grade semiconductor crystals are very stringent. Extraordinarily low impurity levels, on the order of 1 part in 10 billion, require careful handling and treatment of the material at each step of the manufacturing process. During the Czochralski (melt pulling) semiconductor wafer production process (also known as the CZ process), highly irregular shaped polycrystalline silicon nuggets (Fig. 1) are packed (charged) into large fused quartz crucibles.1,2 The nuggets range in weight from a few grams to about 600 grams. Avoiding contamination, protecting the crucible from damage, and following complex packing density rules are key constraints during the process.2 Once packed, these crucibles are placed in ovens, in which the CZ melt pulling process occurs. The extruded semiconductor ingots are then sliced into wafers from which semiconductor chips are etched. Currently 45.7-cm-diam crucibles, which are packed manually, are being replaced by much larger (more than 91.5 cm diam) crucibles, for use in the upcoming fabrication of a new generation of 300-mm-diam wafers.2,3 In these crucibles, manual packing is neither ergonomic nor practical. Automation has the potential benefit of reducing cost, achieving greater packing consistency, and reducing packing time. Previous studies have shown that because each nugget has a unique size and shape, and because of strict packing rules, fixed automation was not feasible.1,2,4

Fig. 1 Typical polycrystalline silicon nuggets: (a) top and (b) side.

The large variance in size and weight, the irregular shape of the nuggets (see Fig. 1), and the strict process requirements result in four key technical challenges to automate the crucible packing process with a robotic system. First, the nuggets are difficult to grasp. Second, nuggets must be placed in accordance with their geometry and the

1406 Opt. Eng. 41(6) 1406–1417 (June 2002) 0091-3286/2002/$15.00 © 2002 Society of Photo-Optical Instrumentation Engineers



proper set of packing rules within the process specifications. Hence each nugget and crucible surface must be scanned rapidly and accurately to get surface profiles before a nugget is placed. Third, the planning system must use this profile information to determine the optimal placement of a nugget in a crucible. Determining the best location to place each nugget is necessary due to the importance of the packing density and complex process constraints. Finally, the irregularly shaped and stiff nuggets must be carefully placed against the fragile quartz crucible wall without scratching and contaminating the process. They must also be carefully placed against other nuggets in the process without disturbing the existing structure. This is usually accomplished by handling the larger nuggets individually. Figure 2 shows the schematic of a packed crucible. The packing rules are set by the technology and the stringent traditional CZ crystal production rules. These packing rules require that the nuggets be placed in layers. Packing is initiated with a bed layer formed by smaller, gravel-sized nuggets. The following nugget layers consist of larger wall nuggets and internal smaller bulk nuggets. Alternate filling of the bulk and wall nuggets eventually results in a full crucible. Finally, packing is completed with a crown of larger nuggets.

Fig. 2 Typical crucible charging: (a) charging constraints and (b) charging process.

Figure 3 shows the system concept developed in this study.1,2 During the stratified packing, nuggets are acquired one at a time by the manipulator and passed over the nugget scanner to obtain its surface profile. Simultaneously, the crucible surface profile is mapped by an overhead vision system. A packing algorithm applies this vision data to compute an ideal position for the nugget. The system timing is critical to the economic viability of the system. To stay competitive with human packing, individual nuggets forming the wall and the crown need to be placed one every 10 sec.2,3 In such a system, the operator will only be responsible for opening bags and sorting nuggets.

A key technical component is a vision system to provide the 3-D surface geometries of the individual nuggets before they are placed in the crucible and of the nuggets that have already been packed.4 Machine vision providing 3-D surface geometry information has been widely studied and applied.5–13 The CZ process contains constraints, such as a complex environment, cost, and compactness, which make the practical design of 3-D surface geometry acquisition very challenging.

The design of a 3-D vision system using active laser triangulation with CCD cameras to meet the stringent requirements of the CZ process is presented. One system element is for surface mapping of the landscape of nuggets in the crucible, and the second for the individual nugget mapping. We outline the design, implementation, and calibration of the systems. We also present experimental results that show it can meet the requirements of CZ process automation.

2 Vision System Design

2.1 System Requirements

The vision system of the crucible packing system gathers the environmental data that are required by the packing algorithm. After being grasped, the nugget shape and orientation, and the internal surface profile of the crucible, are measured and processed by the computer vision system. The packing algorithm utilizes the three-dimensional data, or a range map of the profiles for the nugget and the internal crucible surface, to determine a satisfactory location for the nugget within the current crucible state. The primary considerations for an industrial vision system are cost, speed, accuracy, and reliability. To pay for itself, an industrial vision system must outperform the human labor it replaces in all of these categories.11,13,14 In most cases, the vision system must perform a real-time analysis of the image to compete with humans. Table 1 summarizes the system requirements for the crucible charging vision system as determined by the factory and charge requirements.1,2

2.2 Range Sensing

Range-imaging sensors collect three-dimensional coordinate data from visible surfaces in a scene, and can be used in a wide variety of automation applications, including object shape acquisition, bin picking, robotic assembly, inspection, gauging, mobile robot navigation, automated cartography, medical diagnosis (biostereometrics), etc.11,14–17 The image data points explicitly represent scene surface geometry as sampled points. The inherent problems of interpreting 3-D structure in other types of imagery (where 3-D data is obtained based on 2-D cues) are not encountered in range imagery, although most low-level problems, such as filtering, segmentation, and edge detection, remain.

Fig. 3 Crucible charging system layout.

Most optical techniques for obtaining range images are based on one of five principles: triangulation, holographic interferometry (phase shift measurement), radar (time of flight), lens focus, and moiré techniques.6,7,15,18 A detailed review in which several of these methods are discussed and compared has been published.7,10–12 Table 2 lists the main comparison features of these five methods. All these methods suffer some limitations, such as blind regions, computational complexity, highly textured or structured scenes, surface orientation, and/or spatial resolution.10,15 Active triangulation eliminates many problems provided that an intense enough light source is available. It can capture the third dimension through model-free range measurement. This can be very useful in 3-D scene analysis. It can resolve many of the ambiguities of interpretation arising from lack of correspondence between object boundaries and inhomogeneities of intensity, texture, and color found in other methods.6,7,10,12,14

2.3 Active Triangulation

One of the most commonly seen methods for the acquisition of three-dimensional data is the laser triangulation method.7–9,15,19,20 A structured light source, such as point, line, or color coding, is used to illuminate the object, and either one or more cameras (possible for stereoscopic vision systems) detect the reflected light (see Fig. 4). The location and orientation of the cameras (determined precisely with calibration) yields an equation that determines the 3-D location of the illuminated point. By scanning the entire object, a 3-D map of the object can be acquired.

2.3.1 Model 1: Point-wise triangulation using two stereoscopic cameras

Figure 5 is an orthographic view of the system. The origin of the world coordinate system is set at the front node of the lens of camera 1. The X axis passes through the front node of the lens of camera 2. The Z axis is positive in the direction of the laser beam, which originates between the two cameras. In this design, the position/orientation of the laser is not critical but provides for a simple solution to the stereo matching problem.

The v axis of the detector is assumed to be parallel to the world Y axis (though this requirement can be mathematically relaxed). Optimization of the combined field of view requires that θ2 = −θ1 = 45 deg. The distances between the back nodes of the lenses and the detectors are the principal focal distances f1 and f2. The 2-D coordinates

Table 1 Vision system requirements.

1. High accuracy: approximately 1 mm
2. Fast (for automation): nugget mapping ≈2 to 3 s; crucible surface mapping ≈4 to 5 s
3. Obtrusiveness: no interference with the robotic system; minimum change in factory ambient lighting
4. System interfacing: suitable data representation for packing
5. Costs: factory system vision costs ≤$20,000


Table 2 Qualitative comparisons of range-sensing methodologies.

Active/passive triangulation: resolution/accuracy >2.5 mm (with specific hardware); data acquisition rate <10-M pixels/s; depth of field O(mm) to O(10 m); limitations: detector noise, data processing power.

Holographic interferometry: resolution/accuracy >3 mm (0.4 nm theoretical); data acquisition rate <1-K points/s; depth of field ≈O(mm); limitations: alignment, system noise.

Radar (TOF, AM, FM): resolution/accuracy ≈100 mm; data acquisition rate ≪100-K pixels/s; depth of field O(mm) to O(100 m); limitations: high resolution and data acquisition rate imply small depth of field.

Lens focus: resolution/accuracy >1 mm; data acquisition rate <60-K pixels/s; depth of field O(100 mm) to O(m); limitations: lens quality, positioning, measuring.

Moiré techniques (projection, shadow, single frame + reference, multiple frame): resolution/accuracy >11 mm; data acquisition rate <100-K pixels/s; depth of field >O(100 mm); limitations: only for smooth surfaces; high resolution implies small depth of view.

measured on the image planes are (u1, v1) and (u2, v2) in the detectors' own coordinate systems. These can be transformed into vectors R1 and R2 in the world coordinate system with:

$$\vec{R}_1 = M_1\vec{R}_1' \quad\text{and}\quad \vec{R}_2 = M_2\vec{R}_2', \qquad (1)$$

where

$$M_1 = \begin{bmatrix} \cos\theta_1 & 0 & -\sin\theta_1 \\ 0 & 1 & 0 \\ \sin\theta_1 & 0 & \cos\theta_1 \end{bmatrix} \quad\text{and}\quad M_2 = \begin{bmatrix} \cos\theta_2 & 0 & -\sin\theta_2 \\ 0 & 1 & 0 \\ \sin\theta_2 & 0 & \cos\theta_2 \end{bmatrix}, \qquad (2)$$

and $\vec{R}_1'$ and $\vec{R}_2'$ are in the detector coordinate systems:

$$\vec{R}_1' = \begin{Bmatrix} u_1 \\ v_1 \\ f_1 \end{Bmatrix} \quad\text{and}\quad \vec{R}_2' = \begin{Bmatrix} u_2 \\ v_2 \\ f_2 \end{Bmatrix}. \qquad (3)$$

Fig. 4 General triangulation layout.

In theory, extensions of R1 and R2 should intersect each other because they are directions of light rays that come from the same light source located at p (Fig. 5). In practice, any misalignment or error in the system can result in skew. The vector e is defined to be the skew vector such that |e| is

the shortest distance between the two lines. To find e, we define:

$$e(S_1,S_2) = S_1\vec{R}_1 - S_2\vec{R}_2 - \vec{R}_d. \qquad (4)$$

Minimizing $\{e(S_1,S_2)\cdot e(S_1,S_2)\}$ with respect to $S_1$ and $S_2$ (by taking partial derivatives) gives:

$$S_1 = \frac{(\vec{R}_1\cdot\vec{R}_d)(\vec{R}_2\cdot\vec{R}_2) - (\vec{R}_2\cdot\vec{R}_d)(\vec{R}_1\cdot\vec{R}_2)}{(\vec{R}_1\cdot\vec{R}_1)(\vec{R}_2\cdot\vec{R}_2) - (\vec{R}_1\cdot\vec{R}_2)(\vec{R}_1\cdot\vec{R}_2)} \quad\text{and}\quad S_2 = \frac{(\vec{R}_2\cdot\vec{R}_d)(\vec{R}_1\cdot\vec{R}_1) - (\vec{R}_1\cdot\vec{R}_d)(\vec{R}_1\cdot\vec{R}_2)}{(\vec{R}_1\cdot\vec{R}_2)(\vec{R}_1\cdot\vec{R}_2) - (\vec{R}_2\cdot\vec{R}_2)(\vec{R}_1\cdot\vec{R}_1)}, \qquad (5)$$

and e can thus be calculated. The coordinates of the spot p are defined to be at the center of e, giving:

$$p = \tfrac{1}{2}(S_1\vec{R}_1 + S_2\vec{R}_2 + \vec{R}_d). \qquad (6)$$

Since the system can determine the 3-D coordinates of a point anywhere in the combined field of view (FOV), and measure points in any order, it is possible to analyze the data stream from the system during acquisition and adaptively scan the light spot in response, allowing for data oversampling and the collection of spatially dense data in regions of interest.15 The resolution, based on the nominal viewing volume, is found to be:

$$\text{Resolution} = \frac{|\vec{R}_d|/2}{\text{Detector resolution}}. \qquad (7)$$
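The least-squares ray intersection of Eqs. (4) through (6) can be sketched in a few lines of code. In this sketch, R1 and R2 are the two viewing-ray directions in world coordinates and Rd is the baseline vector between the two lens front nodes; the numeric values in the usage example are hypothetical:

```python
# Sketch of Eqs. (4)-(6): closest approach of two skew rays, with the
# illuminated point p taken at the midpoint of the skew segment.
def dot(a, b):
    return sum(x * y for x, y in zip(a, b))

def triangulate(R1, R2, Rd):
    d11, d22, d12 = dot(R1, R1), dot(R2, R2), dot(R1, R2)
    d1d, d2d = dot(R1, Rd), dot(R2, Rd)
    S1 = (d1d * d22 - d2d * d12) / (d11 * d22 - d12 * d12)  # Eq. (5)
    S2 = (d2d * d11 - d1d * d12) / (d12 * d12 - d22 * d11)
    return [0.5 * (S1 * r1 + S2 * r2 + rd)                  # Eq. (6)
            for r1, r2, rd in zip(R1, R2, Rd)]
```

For two rays that actually intersect, e.g., camera 1 at the origin looking along (1, 0, 1) and camera 2 offset by Rd = (1, 0, 0) looking along (−0.5, 0, 0.5), the returned midpoint is the true intersection (0.5, 0, 0.5).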

2.3.2 Model 2: Point-wise triangulation using one camera

The basic design is shown in Fig. 6.20 A beam of light originates from position d along the X axis, projects at an angle θ0, and defines a reference point (d/2, l) that is used for calibration. At the origin, a lens of focal length f focuses the reflected light on a position sensor aligned parallel to the X axis and in focus at −fl/(l−f) along the Z axis. It is assumed that the values of d, l, and f are known. Under rotation of an optical scanner, the light beam rotates to another angular position θ0+θ (θ is negative in Fig. 6). The spot of light on the position sensor moves from df/(2l−2f) to location p due to the intersection of the projected light beam with the object surface at (x, z). The relationship between the coordinates (x, z) and the parameters of the geometry is:

$$x = d\,p\left[p + \frac{f\,l\,(2l\tan\theta + d)}{(l-f)(d\tan\theta - 2l)}\right]^{-1} \quad\text{and}\quad z = -d\left[\frac{p(l-f)}{f\,l} + \frac{2l\tan\theta + d}{d\tan\theta - 2l}\right]^{-1}. \qquad (8)$$
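Equation (8) can be transcribed directly; the sketch below uses the symbol names of Fig. 6 (sensor coordinate p, source offset d, reference height l, focal length f, scan angle θ) and all numeric test values are illustrative:

```python
import math

# Sketch of Eq. (8): recover the object-surface point (x, z) from the
# position-sensor reading p, given the known geometry d, l, f and the
# commanded scan angle theta.
def xz_from_sensor(p, d, l, f, theta):
    t = math.tan(theta)
    k = (2 * l * t + d) / (d * t - 2 * l)   # shared geometric term
    x = d * p / (p + f * l * k / (l - f))
    z = -d / (p * (l - f) / (f * l) + k)
    return x, z
```

As a sanity check at θ = 0 the beam passes through (d, 0) and the reference point (d/2, l), so z computed from the beam/projection intersection must agree with Eq. (8).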

Fig. 6 Triangulation using one camera and reference point.

Fig. 7 Triangulation using one camera and laser pose.

Equation (8) relies on accurate knowledge of the position of the reference point. Additionally, due to the optical nonlinearities of lenses, selection of an appropriate reference point can be difficult. An alternative, more accurate form of active triangulation using one camera is seen in Fig. 7. A single camera is aligned along the Z axis with the


center of the front node of the lens located at (0, 0, 0), giving the origin of the camera coordinate frame. At a baseline distance b to the left of the camera (along the negative X axis) is a laser that projects a plane of light at a variable angle α relative to the X-axis baseline. The point (x, y, z) in the scene is projected onto the digitized image at the pixel (u, v) (see Fig. 7). The measured quantities (u, v) are used to compute the 3-D coordinates (x, y, z) of the illuminated scene point:

$$[x\;\;y\;\;z] = \frac{b}{f\cot(\alpha) - u}\,[u\;\;v\;\;f]. \qquad (9)$$
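A direct transcription of Eq. (9), assuming pixel coordinates (u, v) measured from the image center and consistent length units for the baseline b and focal length f:

```python
import math

# Sketch of Eq. (9): all three scene coordinates share the common scale
# factor b / (f*cot(alpha) - u).
def scene_point(u, v, f, b, alpha):
    s = b / (f / math.tan(alpha) - u)
    return s * u, s * v, s * f
```

With α = 45 deg and u = 0 (a point on the optical axis), the depth reduces to z = b tan α = b, which matches the geometry of Fig. 7.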

For any given focal length f and baseline distance b, the resolution of this triangulation system is only limited by the ability to accurately measure the angle α and the image plane coordinates (u, v). The X and Y system resolution (dX, dY) is given by:

$$(dX, dY) = \frac{R}{\text{Detector resolution}}, \qquad (10)$$

where R is the width of the projected image onto the detector. The depth resolution dZ, given an incident laser angle α, image width W, and n-pixel detector resolution, is given by:

$$dZ = (W\tan\alpha)/n. \qquad (11)$$
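Equations (10) and (11) can be checked numerically; the numbers used below are illustrative, not the paper's design values:

```python
import math

def lateral_resolution(R, n_pixels):
    """dX (or dY), Eq. (10): projected image width over detector resolution."""
    return R / n_pixels

def depth_resolution(W, alpha, n_pixels):
    """dZ, Eq. (11): image width W, incident laser angle alpha, n pixels."""
    return W * math.tan(alpha) / n_pixels
```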

The value of the resolution (dX, dY) may differ for the two image plane coordinates (u, v), as the detector resolution may not be identical for the two axes. Due to the added simplicity and larger field of view, this setup has been implemented on the laboratory prototype system, and is described in the following sections.

2.4 System Layout—Crucible Surface Mapping

Fig. 8 Crucible mapping vision system layout.

The layout for the crucible surface mapping system is shown in Fig. 8. The crucible is scanned while the manipulator is out of the FOV. Scanning a surface involves projecting and moving laser light in the form of a line (rather than a point) across the surface. The scanning consists of two phases: a low-resolution scan that is updated every 10 nugget placement cycles, and a high-resolution scan every nugget placement cycle in the area that was manipulated. The high-resolution scans are patched together to give a larger high-resolution map of the entire crucible. These maps are used to routinely monitor the crucible surface to identify any gross motions of nuggets falling out of place. Regions in the low-res map that do not match the high-res map (within a given tolerance) are remapped with a high resolution.
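The consistency check between the stale low-resolution map and the fresh high-resolution patches can be sketched as follows. The grid layout (2-D lists of surface heights on a common low-resolution grid), the units, and the tolerance are illustrative assumptions:

```python
# Sketch: flag low-res cells that disagree with the (downsampled)
# high-res patches by more than the tolerance, marking them for remapping.
def cells_to_remap(low_res, high_res_downsampled, tol_mm=2.0):
    flagged = []
    for r, (lo_row, hi_row) in enumerate(zip(low_res, high_res_downsampled)):
        for c, (lo, hi) in enumerate(zip(lo_row, hi_row)):
            if abs(lo - hi) > tol_mm:
                flagged.append((r, c))
    return flagged
```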

Based on the requirements outlined in Sec. 2.1, the crucible surface mapping system parameters outlined in Fig. 9 and Table 3 were developed. A crude/low-resolution global scan gives an XY resolution of 1.5 cm and takes one second to process for a 45.7-cm-diam crucible. Further resolution

Fig. 9 Crucible surface mapping design parameters.

Table 3 Crucible surface mapping design parameters.

Parameter: Lab design
h: 54 inches
h′: 14 inches + crown
w: 18 inches
b: 36 inches
γ: 9.5 degrees
α1: 50.3 degrees
α2: 63.4 degrees
Camera resolution (pixels): 500 × 500
Z resolution at A: 1.1 mm
Z resolution at B: 2.4 mm
X, Y resolution at A: 0.9 mm
X, Y resolution at B: 1.2 mm
Cost: $10,700


improvement is achieved by allowing longer scan times. A higher resolution local scan (15-cm-wide band across the crucible) gives XY resolution of 1 mm and takes 4.5 s to process. All times are based on a standard video frame rate of 30 frames/s. These times are comparable to the time required for the manipulator actions of acquiring a new nugget and bringing it to the crucible of approximately 5.5 s.12 Additionally, to avoid significant changes to the ambient lighting conditions, extracting the reflected laser light in the image requires subtracting an image with the laser on and one with the laser off. Laser intensity for such an operation is dependent on camera sensitivity, ambient lighting levels, nugget reflectivity, and proximity. For the laboratory system, a 5-mW laser was sufficient.

The crucible optical scanner uses a moving-magnet galvanometer. A moving-magnet motor has no saturation torque limit and very little electrical inductance. Thus extremely high torques can be generated very rapidly, an essential feature for systems requiring short step response times. The scanner controller is tuned for the inertial load of its mirror. A capacitive position detector within the scanner provides a differential current position control feedback. A latching digital circuit gives a maximum 16 bits of scanner control, equivalent to 10-μrad resolution.
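The laser-on/laser-off subtraction described above can be sketched as follows; images are taken as 2-D lists of grayscale intensities, and the threshold value is an illustrative assumption:

```python
# Sketch: subtract the laser-off frame from the laser-on frame and keep,
# per image row, the column with the strongest positive difference (the
# laser line), provided it exceeds the threshold.
def laser_pixels(img_on, img_off, threshold=30):
    line = []
    for row_on, row_off in zip(img_on, img_off):
        diffs = [on - off for on, off in zip(row_on, row_off)]
        col = max(range(len(diffs)), key=diffs.__getitem__)
        line.append(col if diffs[col] > threshold else None)
    return line
```

Because the two frames see the same ambient illumination, the difference image isolates the projected laser light without requiring changes to the factory lighting.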

2.5 System Layout—Nugget Surface Mapping

The prototype laboratory nugget surface mapping system is shown in Fig. 10. The manipulator grasps and carries a nugget over the vision system before it is placed in the crucible. It is scanned in real time as it moves across the camera. This requires careful synchronization of the manipulator motion and the nugget mapping system. The manipulator is required to maintain a constant linear velocity of the nugget across the camera FOV of 3 cm/s based on a 1-mm XY resolution, a nominal nugget size of 7.5 × 7.5 cm, and a 30 frames/s video rate. The nugget mapping system parameters are summarized in Fig. 11 and Table 4.

Fig. 10 Nugget surface mapping system layout.


The process of nugget selection from the mapped image requires two primary considerations: 1. differentiating the nugget from the parts of the manipulator, and 2. differentiating the nugget from any overhanging gripper parts. Consideration 1 is resolved by setting the lowest visible part of the manipulator at a predefined reference height, with which data comparisons can be made and manipulator parts can be eliminated. Resolving consideration 2 can be more involved. The gripper is composed of three closely spaced FDA-grade vinyl B3-1 vacuum cups mounted on a manifold plate.1,2 The manifold plate mounts to the end effector of the 7-DOF manipulator. Suction forces hold the nugget in place. The deformable shape of the gripper parts (suction cups) excludes feature extraction techniques. Differentiating the gripper based on reflection intensity properties can be misleading, as the cups get coated with a silicon dust layer during the charging process. The assumption that the gripper will always be occluded by the nugget in the field of view of the nugget mapping system is not a valid generalization, but true for a large fraction of the grasped nuggets.2 The most reliable solution is to differentiate the gripper parts based on color mapping. The contrasting colors of the suction cups and the silicon nuggets, and the presence of ambient lighting, make this option simple and robust to implement on the robotic system. Thus, the nugget mapping system with a color camera ignores measurements of reflected light originating from colored surfaces not matching a predefined palette.
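The color-palette rejection can be sketched as a simple per-pixel test; the palette color and tolerance below are hypothetical, not the paper's calibrated values:

```python
# Sketch: a pixel is treated as gripper (and its range measurement
# ignored) if its RGB color is within tol (per channel, max-norm) of any
# suction-cup palette color.
def is_gripper_color(rgb, palette, tol=40):
    return any(max(abs(c - p) for c, p in zip(rgb, ref)) <= tol
               for ref in palette)

CUP_PALETTE = [(200, 60, 60)]  # hypothetical red-orange cup color
```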

Fig. 11 Nugget mapping system design parameters.

Table 4 Nugget surface mapping design parameters.

Parameter: Lab/factory design
h (distance to nugget): 10 inches
b: 1.7 inches
γ: 12 degrees
α: 84 degrees
Camera resolution (pixels): 1000 × 1000
Z resolution: 1.03 mm
X, Y resolution: 0.11 mm
Cost: $4,200


3 Calibration

3.1 Intrinsic Camera Calibration

A perfect lens and a detector with perfectly linear characteristics would produce simple trigonometric relationships between the angle of an incoming light ray and the image plane coordinates (u, v) of the light source. In practice, imperfections in either of these elements will result in distortions.21,22 The purpose of camera intrinsic calibration is to find and map the errors in image plane coordinates (u, v) due to the nonlinearities in lenses, detectors, and electronics.15 These error measurements are used to compensate the measured values to produce accuracy equal to the resolution of the system. Given a known horizontal incident angle γ and vertical incident angle β between the light ray and the principal axis, and a known principal distance f, the "expected" image plane coordinate (u, v) can be calculated (see Fig. 12). Subtracting these from their actual values gives the error map value.

$$u_{\mathrm{expected}} = f\tan\gamma, \qquad v_{\mathrm{expected}} = f\tan\beta \qquad (12)$$

$$E_u = u_{\mathrm{expected}} - u_{\mathrm{measured}} = f\tan\gamma - u_{\mathrm{measured}} \qquad (13)$$

$$E_v = v_{\mathrm{expected}} - v_{\mathrm{measured}} = f\tan\beta - v_{\mathrm{measured}}. \qquad (14)$$

This scheme assumes that $E_u$ at γ = 0 and $E_v$ at β = 0 are 0. For nonideal lenses, the focal length f would have to be mapped as an average value given by15:

$$f \approx \bar{f} = \frac{1}{n}\sum^{n}\left|\frac{u_{\mathrm{measured}}}{\tan\gamma}\right|. \qquad (15)$$

Using binary interpolation, an $E_u$ and $E_v$ can be looked up for every (u, v) pair, and compensation made. This error is generally important when system accuracy requirements are at submillimeter levels.
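The error-map construction of Eqs. (12) through (14) and the table lookup can be sketched as below. The lookup here uses bilinear interpolation over a unit-spaced calibration grid, assuming that is what the "binary interpolation" in the text refers to; all numeric values are illustrative:

```python
import math

def error_entry(f, gamma, beta, u_meas, v_meas):
    """(E_u, E_v) for one commanded (gamma, beta) goniometer pose,
    per Eqs. (12)-(14)."""
    return f * math.tan(gamma) - u_meas, f * math.tan(beta) - v_meas

def bilinear(grid, x, y):
    """Interpolate a unit-spaced 2-D table of error values at fractional
    (x, y); callers must keep (x, y) inside the grid interior."""
    x0, y0 = int(x), int(y)
    fx, fy = x - x0, y - y0
    return ((1 - fx) * (1 - fy) * grid[x0][y0]
            + fx * (1 - fy) * grid[x0 + 1][y0]
            + (1 - fx) * fy * grid[x0][y0 + 1]
            + fx * fy * grid[x0 + 1][y0 + 1])
```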

The hardware setup for this process is as follows. The camera lens system is mounted on a two-axis (pitch and yaw) goniometer. To square up the axes of the detector to the goniometer, a high-precision displacement indicator is placed against the lens mount and the camera rotated so that the detector experiences roll and yaw. By varying the angles and monitoring the displacement indicator, the detector axes can be aligned with the goniometer axes. Next, the zero position of the lens detector has to be found. This is the location where the principal axis of the lens intersects the detector. Again, with the camera oriented such that the detector experiences roll and yaw, an LED is placed in front of the camera. By rolling the detector and mapping the LED trace, a circle center can be defined. This is the true zero, from which Eqs. (12), (13), and (14) are applied. Next, the camera is rotated (yaw) by 90 deg, so that the detector can experience pitch and yaw. The LED is positioned such that at the camera home position (pitch and yaw angles of 0 deg), an image is formed at the camera zero. This is achieved by manual adjustments of the LED until the image is formed at the desired location. Finally, by setting up the appropriate range of pitch and yaw angles, the calibration continues with the aid of Eqs. (12), (13), and (14).

Fig. 12 Relationship between errors and angles.

3.2 Extrinsic Camera Calibration

The extrinsic variables of the system that have to be predetermined are the focal length f and the intercamera laser distance b. Given an unknown lens focal length, intercamera laser distance b, and an unknown mounting orthogonality with respect to the ground, it is desired to be able to solve for the unknowns. The geometry shown in Fig. 13 yields six independent equations in six unknowns:

$$l = \frac{g}{\sin\alpha} \qquad (16)$$

$$b = z\cot\alpha \qquad (17)$$

$$b = b' - l\cos\alpha \qquad (18)$$

$$z^2 + b^2 = r^2 \qquad (19)$$

$$\frac{\sin(\beta-\alpha)}{\delta h} = \frac{\sin(\pi/2-\beta)}{r+l} \qquad (20)$$

$$\frac{\sin(\pi-\beta)}{l} = \frac{\sin(\beta-\alpha)}{x}, \qquad (21)$$

Fig. 13 Extrinsic calibration geometry for mapping system.


where g (measurable), α, β (predefined scan angles), and δh (object height) are known. The variable x is defined as the distance between the intersection points of the two incident rays, at angles α and β, with the extended camera lens front nodal plane. Tables 5 and 6 summarize the values obtained for the extrinsic parameters after solving Eqs. (16) through (21) for both the crucible surface mapping and nugget mapping systems. In this, it is assumed that the incident angles can be found with the required accuracy.

3.3 Timing Calibration

As described in Sec. 2.5, timing between the nugget mping system and the manipulator motion is critical. Atideal frame rate of 30 Hz and a resolution of 1 mm, tmanipulator is required to travel at 3 cm/s with a maximuallowable error of 0.5 mm/s for a scan time of 2.5 s. Tprimary source for errors in nugget position estimatiarises in uncertainties in the video processing rates. Terror can be resolved in one of three ways: 1. nugget mping system manipulator coupling, where the manipulais driven by interrupts provided by the nugget mappisystem, reducing the problem to a robot position based ctrol scheme; 2. off-line nugget mapping system timinwhere the average processing rate of the nugget mapsystem is measured off-line and then the known maniptor speed is used to infer nugget position; and 3. on-lnugget mapping system timing, where the time betwe

Table 5 Laboratory nugget mapping system extrinsic calibrated parameter values.

Optical image size (diameter) 6.477 mm

Half angle of view cone 20.6 deg

Effective focal length 8.616 mm


individual frames is measured during the scan and the known manipulator speed is used to obtain the correct nugget position. Due to its inherent simplicity (in computation and implementation) and accuracy, the on-line nugget mapping system timing is used in this research. Hence, any fluctuations in the frame rate or error in a priori knowledge of the frame rate are accounted for.

3.4 Calibration of Coordinate Frames

Finally, since all nugget manipulation is performed in the manipulator inertial coordinate frame, calibration is required for the coordinate transformation between the imaging and manipulator systems. The crucible charging system is based on three coordinate frames: 1. the nugget mapping system coordinate frame; 2. the crucible mapping system coordinate frame; and 3. the manipulator coordinate frame. To successfully operate this system, the three coordinate frames (and images) must be related to a common reference frame, chosen as the manipulator coordinate frame.

Given a mapping system and the manipulator coordinate frames, the approach is to simplify the problem from that of six unknowns [three positions (Tx, Ty, Tz) and three rotations (α, β, γ)] to that of three unknowns in position only. For a mapping system in some arbitrary position and orientation with respect to the manipulator, the general transformation matrix between the two coordinate frames is given by:

Table 6 Laboratory crucible mapping system extrinsic calibrated parameter values.

Effective focal length f 15.8 mm

Intercamera laser distance b 890 mm

Front node to mounting center g 153.03 mm

Object height 101.6 mm

Angles α, β 41.2, 44.5 deg

\[
\begin{bmatrix}
[R_{xyz}]_{3\times3} & T_{xyz} \\
\vec{0} & 1
\end{bmatrix}
=
\begin{bmatrix}
\cos\beta\cos\alpha & -\sin\gamma\sin\beta\cos\alpha - \cos\gamma\sin\alpha & \sin\gamma\sin\alpha - \cos\gamma\sin\beta\cos\alpha & T_x \\
\cos\beta\sin\alpha & \cos\gamma\cos\alpha - \sin\gamma\sin\beta\sin\alpha & -\sin\gamma\cos\alpha - \cos\gamma\sin\beta\sin\alpha & T_y \\
\sin\beta & \sin\gamma\cos\beta & \cos\gamma\cos\beta & T_z \\
0 & 0 & 0 & 1
\end{bmatrix}, \tag{22}
\]


where α, β, and γ are the Euler angles for the rotational transformation between the two frames, and Tx, Ty, and Tz are the coordinates for the translational transformation between the two frames. If two points are known in the two coordinate frames, then by applying

\[
\begin{bmatrix}
x_{\mathrm{camera}} \\
y_{\mathrm{camera}} \\
z_{\mathrm{camera}} \\
1
\end{bmatrix}
=
\begin{bmatrix}
[R_{xyz}] & T_{xyz} \\
\vec{0} & 1
\end{bmatrix}
\begin{bmatrix}
x_{\mathrm{robot}} \\
y_{\mathrm{robot}} \\
z_{\mathrm{robot}} \\
1
\end{bmatrix} \tag{23}
\]

to the two sets of point coordinates, the six unknowns can

be solved. In practice, solutions to transcendental equations require numerical methods. To simplify the problem, the crucible mapping system mounting is modified to fit a three rotational degree of freedom adjustment mechanism, consisting of a rotational table and a two-axis gimbal platform. This allows for roll, pitch, and yaw of the crucible mapping system. The crucible mapping system is then manually adjusted to eliminate rotational misalignments between the two coordinate frames. The general coordinate transformation problem simplifies to that of only translational transformation. For translational calibration, a single known point on the manipulator end effector is mapped by the imaging system. If this point is at a vector x_robot in the manipulator frame and x_camera in the imaging frame, then


the crucible mapping frame can be located with respect to the manipulator frame by vector triangulation, given x_camera − x_robot. The transformation matrix simplifies to:

\[
\begin{bmatrix}
[R_{xyz}] & T_{xyz} \\
\vec{0} & 1
\end{bmatrix}
=
\begin{bmatrix}
1 & 0 & 0 & x_{\mathrm{camera}} - x_{\mathrm{robot}} \\
0 & 1 & 0 & y_{\mathrm{camera}} - y_{\mathrm{robot}} \\
0 & 0 & 1 & z_{\mathrm{camera}} - z_{\mathrm{robot}} \\
0 & 0 & 0 & 1
\end{bmatrix}. \tag{24}
\]

A similar method is used to calibrate the coordinate transformation of the nugget mapping system coordinate frame with respect to the manipulator inertial coordinate frame.
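Since the gimbal adjustment nulls the rotations, the calibration of Eq. (24) reduces to a single vector subtraction. A minimal sketch, with illustrative type and function names:

```cpp
// Translational calibration sketch for Eq. (24): one known end-effector
// point fixes the offset T = x_camera - x_robot; any later camera-frame
// point then maps to the robot frame by subtracting T.
struct Vec3 { double x, y, z; };

Vec3 offsetFromKnownPoint(const Vec3& cam, const Vec3& robot) {
    return {cam.x - robot.x, cam.y - robot.y, cam.z - robot.z};
}

Vec3 cameraToRobot(const Vec3& p_cam, const Vec3& T) {
    return {p_cam.x - T.x, p_cam.y - T.y, p_cam.z - T.z};
}
```

One calibration point suffices precisely because the rotation block is the identity; with residual misalignment, at least three non-collinear points and a numerical fit would be needed instead.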

4 Results

4.1 Crucible Mapping Results

The primary purpose of the crucible mapping system is to provide a map of the nugget field already in the crucible, so that the manipulator-packing planner can determine an acceptable location for the next nugget. Figure 14 shows a raw image of a small nugget field, as might be seen in the crucible. Figure 15 shows a plot of the z(x,y) values of this field provided by the crucible mapping system. A 15-kHz

Fig. 14 Nugget field image.


data acquisition rate is obtained, given a video rate of 30 frames/s and a CCD resolution of 500×500 pixels on a Pentium 166-MHz system. With relatively low cost improvements in both the resolution and the computational speed of the imaging hardware, the data acquisition rate can easily be substantially increased. A quantitative and independently precise measurement of a nugget field to check the crucible mapping system accuracy is difficult. However, estimates of the system's precision indicate that the mapping system should meet its required specifications. As discussed next, the quantitative evaluation of the accuracy of the nugget mapping system can be practically performed.

4.2 Nugget Mapping Results

A number of tests have been performed to evaluate the precision of the nugget mapping prototype system. In the simplest case, a calibration target consisting of seven 1-mm step profiles, machined onto a Delrin block, was scanned. The average error measured for those tests is ±0.34 mm with σ of 0.12 mm (see Fig. 16). Recall that the accuracy requirement for the nugget mapping system is ±1.0 mm.

The system was also evaluated using representative nuggets. Figure 17 shows one such nugget. The profile of this nugget obtained with the nugget mapping system is shown in Fig. 18. A section of this profile is shown in Fig. 19. Also shown in Fig. 19 is the error of the profile for this section compared to the profile measured with a coordinate measuring machine (CMM).

Fig. 15 Nugget field mapped profile.

Fig. 16 Calibration target performance: (a) Delrin block and (b) mapped section profile.



The average error is ±0.4 mm with σ of 0.2 mm. The maximum error seen here is approximately 1 mm, which could be partly an artifact of mismatching of the reference data on the mapped profile.

5 Laboratory System Integration

The laboratory system consists of a robot manipulator/control system, vision system, packing algorithm, and wrist/gripper system (see Fig. 20). The packing procedure plays a supervisory role in planning and assigning control to the major subsystems. This includes nugget acquisition, nugget scanning, crucible surface mapping, placement planning, nugget placement, and bulk filling.

To provide for accurate scheduling, the governing system communicates with the major subsystems, either across computers or across programs. It is recommended that a factory-level system be operated by a central workstation to maintain simplicity. For intercomputer communication, a series of asynchronous handshaking protocols has been developed. These include:

• trigger nugget mapping module for nugget scan

• trigger crucible mapping module for crucible surface profiling

• trigger packing algorithm for image extraction and placement

• transfer nugget position and orientation placement coordinates to manipulator for placement.
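One of these asynchronous handshakes can be sketched as a pair of flags; the names below are illustrative, not the authors' actual protocol. The supervisor raises a request, and the subsystem acknowledges completion.

```cpp
#include <atomic>

// Asynchronous handshake sketch: the supervisory program raises `request`;
// the subsystem clears it and raises `done` when the task finishes. With
// exactly one writer per flag transition, plain atomic loads/stores suffice.
struct Handshake {
    std::atomic<bool> request{false};
    std::atomic<bool> done{false};
    void trigger()  { done.store(false); request.store(true); }  // supervisor side
    void complete() { request.store(false); done.store(true); }  // subsystem side
    bool pending()  const { return request.load(); }
    bool finished() const { return done.load(); }
};
```

The supervisor polls `finished()` rather than blocking, which matches the interrupt-driven scheduling described below.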

Fig. 17 A typical nugget image.

Fig. 18 Nugget mapped profile.


The system was implemented and tested on a 166-MHz Pentium computer, using the C++ programming language. The control code is interrupt driven. The computer system multitasks between two programs: a slow outer loop (which handles subsystem task scheduling and interaction of the system with the user), and a faster time-critical inner control loop (which processes the encoder information and produces output control commands for the manipulator/gripper and the vision systems). Information is passed between the two loops via data latching and semaphores. Since the outer loop can be interrupted at any time, including while writing data to memory, it is necessary to set up strict guidelines about the validity of data being transferred between the two programs. This is an added complication inherent in multitasking or parallel processing.
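The data-latching idea can be sketched as a double buffer with an atomically published index. This is a sketch of the general technique, not the authors' code; a single writer and a single reader are assumed.

```cpp
#include <atomic>

// Double-buffer latch sketch: the time-critical inner loop writes into the
// buffer that is not currently published, then flips the index with one
// atomic store, so the outer loop never observes a half-written record
// (assuming the reader's copy completes between consecutive publishes).
template <typename T>
class DoubleBufferLatch {
    T buf[2]{};
    std::atomic<int> front{0};
public:
    void publish(const T& v) {  // inner control loop side
        int back = 1 - front.load(std::memory_order_acquire);
        buf[back] = v;
        front.store(back, std::memory_order_release);
    }
    T read() const {            // outer loop side
        return buf[front.load(std::memory_order_acquire)];
    }
};
```

The release/acquire pairing orders the buffer write before the index flip, which is the "validity guideline" the text alludes to expressed in memory-ordering terms.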

Since the crucible charging environment can be damaged rather easily, it is very important to allow user interaction to alter the manipulator's behavior on-line. The system is able to pack nuggets at an average rate of 1 nugget every 10 seconds, reaching a charge density of 50% for the 45.7-cm-diam crucibles, allowing the system to stay competitive with human packing.1,2

Fig. 19 Nugget mapped section profile: (*) profile and (o) error based on coordinate measuring machine data.

Fig. 20 Experimental system.


6 Conclusions

The technology has been developed to enable the robotic packing of crucibles in CZ semiconductor wafer production. Benefits of the system include elimination of nonergonomic working conditions, shorter crucible packing times, greater crucible packing consistency, and higher productivity and reduced costs.1 A vision system to provide visible surface geometries of individual nuggets and of nuggets packed in the crucible is a key component of this system.

An optical 3-D surface geometry measuring system has been developed based on active laser triangulation. It is found to be within the design constraints of the CZ application. It measures the geometry profile of the individual nuggets being placed and the landscape of the nuggets already in the crucible, with a resolution of 1 mm and scanning times of 2.5 and 4.5 s, respectively, with a nominal data acquisition rate of 15 kHz. While this speed can be increased relatively directly by low cost improvements in the basic system hardware, it is sufficient to meet the packing system requirements. This study shows that a practical and effective vision system can be designed to perform important and realistic industrial tasks. The integrated system has achieved charge densities of about 50% for 45.7-cm-diam crucibles. The required precision under cost constraints has been demonstrated.

Acknowledgments

The technical and financial support of this work by Shin-Etsu Handotai Co. is acknowledged. Also, the technical cooperation of Professor Y. Ohkami and his team at the Tokyo Institute of Technology is much appreciated.

References

1. V. A. Sujan, S. Dubowsky, and Y. Ohkami, "Design and implementation of a robot assisted crucible charging system," Proc. IEEE Intl. Conf. Robotics Automation, 2, 1969–1975, San Francisco, CA (Apr. 2000).

2. V. A. Sujan, S. Dubowsky, and Y. Ohkami, "Robotic manipulation of highly irregular shaped objects: Application to a robot crucible packing system for semiconductor manufacture," SME J. Manuf. Processes 3(3), in press (2002).

3. V. A. Sujan and S. Dubowsky, "Application of a model-free algorithm for the packing of irregular shaped objects in semiconductor manufacture," Proc. IEEE Intl. Conf. Robotics Automation, 2, 1545–1550, San Francisco, CA (Apr. 2000).

4. V. A. Sujan and S. Dubowsky, "The design of a 3-D surface geometry acquisition system for highly irregular shaped objects: with application to CZ semiconductor manufacture," Proc. IEEE Intl. Conf. Robotics Automation, 2, 951–956, Detroit, MI (May 1999).

5. S. Alkon, "Measurement and inspection using non-contact electro-optical systems: non-contact gauging and measurement with laser triangulation," Proc. SPIE 1038, 28–35 (1989).

6. G. Bazin, B. Journet, and D. Placko, "General characterization of laser range-finder optical heads," Proc. LEOS '96 9th Annual Meeting, 1, 232–233, IEEE Lasers and Electro-Optics Society, New York (1996).

7. P. J. Besl, "Active optical range imaging sensors," Mach. Vision Appl. 1(2), 127–152 (1989).

8. R. L. Dalglish, C. McGarrity, and J. Restrepo, "Hardware architecture for real-time laser range sensing by triangulation," Rev. Sci. Instrum. 65, 485–491 (1994).

9. J. A. Davis, E. Carcole, and D. M. Cottrell, "Range-finding by triangulation with nondiffracting beams," Appl. Opt. 35, 2159–2161 (1996).

10. R. A. Jarvis, "A perspective on range finding techniques for computer vision," IEEE Trans. Pattern Anal. Mach. Intell. PAMI-5(2), 122–139 (1983).

11. B. Journet and G. Bazin, "Laser range-finding techniques for industrial applications," 28th Int. Symp. Automotive Technol. Automation, pp. 369–376, Dedicated Conf. Robotics, Motion and Machine Vision in the Automotive Industries, Automotive Automation Ltd., Croydon, UK (1995).

12. S. Monchaud, "Contribution to range finding techniques for third generation robots," Intell. Autonomous Syst., pp. 459–469, North-Holland, Amsterdam, Netherlands (1987).

13. R. G. Rosandich, Intelligent Visual Inspection, 1st ed., Chapman and Hall, London (1997).

14. T. A. Bossiuex, Integration of Vision and Robotic Workcell, American Institute of Aeronautics and Astronautics Publication, AIAA-94-118CP, Ford Motor Company (1993).

15. W. J. Hsueh and E. K. Antonsson, "An optoelectronic and photogrammetric 3-D surface geometry acquisition system," ASME Winter Annual Meeting on Instrumentation and Components for Mechanical Systems, EDRL-TR92a (Nov. 1992).

16. T. Kanade, H. Kano, S. Kimura, A. Yoshida, and K. Oda, "Development of a video-rate stereo machine," Proc. Intl. Robotics Syst. Conf. (IROS '95), August 5–9, Pittsburgh, PA (1995).

17. K. A. Tarabanis, P. K. Allen, and R. Y. Tsai, "A survey of sensor planning in computer vision," IEEE Trans. Rob. Autom. 11, 86–104 (1995).

18. Y. Tanaka, A. Gofuku, and M. Abdellatif, "Video-rate 3-D range finder using RGB-signals from binocular cameras," Proc. Fourth IASTED Intl. Robotics Manuf., pp. 166–168, IASTED-Acta Press, Anaheim, CA (1996).

19. R. C. Luo and R. S. Scherp, "3D object recognition using a mobile laser range finder," IEEE Intl. Workshop Intell. Robots Syst. IROS '90, 673–677 (1995).

20. M. Rioux, "Laser range finder based on synchronized scanners," Appl. Opt. 23(21), 3837–3844 (1984).

21. E. Hecht, Optics, 2nd ed., Addison Wesley, Reading, MA (May 1990).

22. R. Holmes and C. Cummings, "How to choose the proper illumination source," Techspec 3 (1997–1998).

Vivek A. Sujan received his BS in physics and mathematics summacum laude from the Ohio Wesleyan University in 1996, his BS inmechanical engineering with honors, from the California Institute ofTechnology, 1996, and his MS in mechanical engineering from theMassachusetts Institute of Technology in 1998. He is currently com-pleting his PhD in mechanical engineering at M.I.T. Research inter-ests include the design, dynamics, and control of electromechanicalsystems; mobile robots and manipulator systems; optical systemsand sensor fusion for robot control; and image analysis and pro-cessing. He has been elected to Sigma Xi (Research Honorary So-ciety), Tau Beta Pi (Engineering Honorary Society), Phi Beta Kappa(Senior Honorary Society), Sigma Pi Sigma (Physics Honorary So-ciety), and Pi Mu Epsilon (Math Honorary Society).

Steven Dubowsky received his bachelor’s degree from RensselaerPolytechnic Institute of Troy, New York, in 1963, and his MS andScD degrees from Columbia University in 1964 and 1971. He iscurrently a professor of mechanical engineering at M.I.T. He hasbeen a professor of engineering and applied science at the Univer-sity of California, Los Angeles, a visiting professor at CambridgeUniversity, England, and a visiting professor at the California Insti-tute of Technology. During the period from 1963 to 1971, he wasemployed by the Perkin-Elmer Corporation, the General DynamicsCorporation, and the American Electric Power Service Corporation.His research has included the development of modeling techniquesfor manipulator flexibility and the development of optimal and self-learning adaptive control procedures for rigid and flexible roboticmanipulators. He has authored or coauthored nearly 100 papers inthe area of the dynamics, control, and design of high performancemechanical and electromechanical systems. He is a registered Pro-fessional Engineer in the State of California and has served as anadvisor to the National Science Foundation, the National Academyof Science/Engineering, the Department of Energy, and the USArmy. He has been elected a fellow of the ASME and is a member ofSigma Xi and Tau Beta Pi.
