Visual Inspection System Design




Any visual inspection system must incorporate process control, part handling, sensing, image processing, and flaw analysis. This general model shows how these functions are implemented.

Visual Inspection System Design

Gilbert B. Porter III
Joseph L. Mundy

General Electric Company

Most large industrial firms now support a group of computer scientists and optical and mechanical engineers working to apply techniques of visual pattern recognition to problems in inspection. This interest in automatic visual inspection systems has become a worldwide trend, with projects in most industrialized countries.

The increasing interest in automatic inspection stems from the availability of low-cost processors and memory and solid-state imaging devices. These components have allowed implementation of image processing algorithms which were not cost-effective on mini or mainframe computers, and their flexibility has made applications such as non-contact gauging directly accessible to manufacturing groups. Techniques which have already succeeded in production inspection systems are emphasized in this survey of the basic approaches to inspection system design.

Motivation

First, consider the motivation for developing an automatic visual inspection system. Significant cost savings can be realized in high-volume production lines by eliminating or reducing expensive manual inspection. Improved yield or product quality is more often the incentive in low-volume lines, where a direct cost savings cannot be attained on sheer volume.

A similar motivation occurs in processes where high costs prohibit product inspection at multiple phases in the manufacturing cycle, thus resulting in poor yield. Often, defective raw materials or components are not discovered until the final item is produced. Processing raw materials can produce hostile conditions in which human inspection is infeasible and simple instrumentation is ineffective. These nonrecoverable costs of poor yield can be reduced by an automated system at each phase in the manufacturing cycle.

Generic model

The general problem of visual inspection is best described in terms of a system model. Figure 1 shows the basic components of the model and indicates the major information paths.

0018-9162/80/0500-0040$00.75 © 1980 IEEE


COMPUTER


Overall control of the inspection procedure is performed by the process control system. It coordinates major timing functions, processes operator commands, and controls the system-wide data base. Its task is to implement the normal inspection sequence and handle exceptions; it can range in complexity from a few logic circuits to a real-time software module. An increasingly significant function is the operator interface, since the infusion of inspection systems into the manufacturing environment requires simple and reliable operating procedures.

The remaining modules, discussed in detail below, are specialized and vary considerably in practical implementation. In general terms, the part handling system acquires the part under test and restricts or measures its alignment during inspection. It also handles the disposition of parts according to inspection categories.

The sensing system provides the illumination source for the part, optics to control the illumination, the imaging optics, and the necessary support electronics to interface the detector. Special-purpose sensor systems, such as optical transform or scanning triangulation sensors, may require sophisticated optical systems involving components such as scanning mirrors, elliptical mirrors, and holographic gratings.

The image processing system extracts pertinent information from the image data. This module in essence performs the task of isolating "interesting" areas for further analysis.

The flaw analysis system interprets the areas identified by the image processing system and determines the part classification based on the flaw decision rules.

Part handling system. Transporting parts through the system is one of the most critical aspects of the inspection process. Reliable acquisition and accurate positioning of the part to be inspected can greatly reduce the complexity of other system processes. In image analysis, for example, if the part location is not well known, a larger image must be used to accommodate the range of part positions, thus increasing the image processing load.

At present, part feeding and orientation usually follow conventional automation techniques and are application dependent. Vibratory bowls (Figure 2a) and conveyor belts are typical feeding mechanisms, and in some systems even hand loading into inspection fixtures is acceptable. A great deal of development effort is being applied to multi-axis manipulators for part handling, and methods to enhance feeding reliability are being developed.¹

The most common technique for part orientation uses fixtures to provide a reference to the part coordinate system. Figure 2b shows a typical fixture for measuring hole geometry in castings.

In highly automated systems parts are often transported by conveyor belts, indexing turrets, or rotating disks. Figure 2c shows small-part transport using a glass disk to allow silhouette backlighting.

Following the load/orientation operation, the part must be scanned by the visual sensor. The required manipulation components are determined by the inspection criteria and by the choice of sensor scanning mechanisms. For example, the inspected area can be scanned a point at a time using illumination motion relative to the part and a simple photomultiplier detector. The motion of the point illumination can be obtained by moving the part alone, the sensor/illumination alone, or the two in combination. If the part area to be inspected is visible from a single vantage point, a self-scanned two-dimensional sensor can provide the entire scanning motion, thus eliminating mechanical position control. An intermediate ground between these cases is the linear diode array sensor, which takes a single line image. Area images can be created by moving the part relative to the sensor as above. This arrangement often proves to be the best choice if the entire inspection area cannot be captured with a rectangular 2-D sensing array. Multiple positioning of a rectangular array to assemble a composite image is often as complex as the motion required to mechanically sweep out an image using a linear array. Additionally, the much higher resolution available in linear arrays enables scanning larger areas. For example, the largest 2-D array readily available is 256 × 256 elements, while the largest linear array is 2048 elements.

Following the scanning process, the part must be unloaded, possibly to several locations, based on its classification. If the system is adapted to an existing transport mechanism such as a conveyor, the disposition station may simply be an appropriate point in the transport system.

May 1980 41


Sensing system. The sensing system acquires visual image data from the part under test. The acquisition method, data content and form, and scanning schemes are determined by the inspection problem. Generally, problems fall into two broad overlapping categories: dimensional and surface feature measurement.

Strictly dimensional problems focus on measuring the geometry of the part with little regard for surface condition, while surface feature problems concentrate on the condition, texture, color, etc., of the surface with little regard for location. Coupled problems, such as measuring the dimension of a colored area or the size and location of surface defects, fall in the largely unexplored middle ground.

The implementation of dimensional measurement systems is generally straightforward, and the techniques are well developed. Within the dimensional category two general problem types occur: unoccluded and occluded. Unoccluded problems lead to the simple solution in which a silhouette of the object boundary is formed using collimated light. This is not possible for occluded areas. Figure 3 shows a silhouette measurement configuration for non-occluding and occluding objects. The distance d, the maximum dimension of the part in a direction parallel to the sensor, equals the distance s, the size of the shadow at the sensor, in the construction shown. This relation can be applied in two dimensions by using a collimated flood illuminator and a rectangular sensor. Accuracy of one part in 10⁴ and resolution of 10⁻⁴ cm can be obtained with an incandescent illumination source driving a collimator. Better accuracy, one part in 10⁶, and resolution of 10⁻⁵ cm can be obtained using a mechanically scanned laser as the illuminator (see Figure 4).

This scheme can be applied with high precision over 5-inch to 10-inch fields using time interval measurement rather than direct measurement of the shadow dimension. The shadow is converted to a time interval by focusing the deflected laser spot onto a photodetector and measuring the time during which the spot is blocked. If the exact position of the spot is known for all positions of the scanning system, then knowing the position of the scanning system and the state of the photodetector allows reconstruction of the shadow and thus the part dimension. Since the laser source can be made highly collimated, this technique produces very reliable results.

Additional precision can be obtained by carefully controlling the threshold detection process to produce reliable triggering, and by premeasuring the relation of the exact beam position vs. the scanning position at a series of calibration points. The process of mapping the time intervals back to the dimension is implemented by a calibration table lookup followed by an interpolation operation. Using these improvements, resolution of 10⁻⁶ cm has been claimed under controlled conditions.

Systems for measuring occluded dimensions are much more varied and application oriented. Perhaps the most widespread technique is that of triangulation by reflected light.²,³ Figure 5 shows a simple triangulation system for measuring distances.

The surface displacement ΔZ from a nominal position is measured by projecting a small intense spot on the surface. The image of the spot at the sensor will deviate from its nominal position by

Δd = ΔZ M tan θ,

where M is the optical magnification. Since M and θ are constant for small ΔZ, then ΔZ = K Δd, where K = (M tan θ)^-1.

This scheme has been extended to detect many profile points in parallel by using spatially modulated light. Figure 6 shows the projection of an alternating light and dark line pattern on a surface. The distortion of the lines can be used to reconstruct the surface profile variations in the direction perpendicular to the lines by triangulation in that direction. Extending the pattern to a grid⁴ allows complete reconstruction at grid line intersections.

An interesting variation to this technique uses a masked sensor (or a computer simulation of a masked sensor) to create moire interference fringes which vary in density as a function of the slope.⁵

Surface feature measurement has widely varied applications and is largely problem dependent in method. The applications range from color measurement to profilometry, including the detection of cracks and pits and the measurement of surface roughness.

A scheme for detecting surface color is shown in Figure 7a. By viewing the surface through a color filter matched to the paint color, a contrast difference can be obtained. Alternately, if the unpainted areas are highly reflective, a ratio of broad band to narrow band reflected power can be used with a wide band illuminator to provide the desired contrast.

A second example, shown in Figure 7b, is a system to detect holes in a metal surface with widely varying reflectivity. In this case, a careful analysis of the optics reveals that the reflected signal P_R varies as a function of surface roughness (see Figure 8).

The proper choice of the illumination angle θ can minimize the variation with surface roughness and provide high contrast to hole data. Note that no information about the detailed surface profile is obtained. More detailed analysis from an additional observation position can provide some surface information based on the ratio of reflected signals from two observation points, as shown in Figure 9.

If both observation points are in a known relation to the illumination, say θ₁ = θ₂ = 45°, then the slope of the surface dz/dx can be computed as a ratio of the observed intensities at P.

The detection of small pits and scratches in a surface of varying reflectivity is a similar problem but may require dimensional accuracy beyond that attainable with directional illumination techniques. Two approaches have had significant success in the ability to process large areas at high throughput.

The most direct method for surface inspection is an enhanced version of the dimensional triangulation system. The sensing/manipulation system can be linked to track the surface and provide relative indication of surface position. Speed is limited to about 4000 points per second with current techniques, but rapid progress is being made toward increasing this rate.

Figure 9. Ratio of reflected signals from two observation points can provide surface profile information.

Figure 10. An optical transform used to inspect for very small defects within a large surface area.

A more indirect method of inspection for very small defects within large surface areas uses an optical transform, as shown in Figure 10. This system performs a 2-D Fourier transform of the surface image at the detector. Special detector geometries are available to optically average the transform along the polar coordinate axes, thereby reducing the image processing load. This technique lends itself to the detection of surface features or silhouette features which can be discriminated on the basis of spatial frequency. Typical applications include regular geometries, such as very smooth surfaces and edges, or regularly patterned structures, such as cloth and uniform particulate materials. Small variations in regularity are easily detectable, provided detailed local information is not required. Applications where moderate nonuniformity is an acceptable condition are quite difficult, since the diffraction pattern becomes unstructured and thus difficult to interpret.
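To make the time-interval measurement concrete, the calibration lookup and interpolation step described earlier can be sketched in a few lines. The table values and names below are hypothetical, not taken from the authors' system; the sketch assumes a premeasured table pairing each blocked interval with a known dimension at a series of calibration points.

```python
import bisect

# Hypothetical calibration table: blocked time interval (s) measured at a
# series of calibration points, with the known dimension (cm) at each point.
cal_times = [0.0010, 0.0020, 0.0030, 0.0040]
cal_dims  = [0.498,  1.001,  1.503,  2.004]

def dimension_from_interval(t):
    """Calibration table lookup followed by linear interpolation."""
    i = bisect.bisect_right(cal_times, t)
    i = min(max(i, 1), len(cal_times) - 1)   # clamp to the table range
    t0, t1 = cal_times[i - 1], cal_times[i]
    d0, d1 = cal_dims[i - 1], cal_dims[i]
    return d0 + (d1 - d0) * (t - t0) / (t1 - t0)

print(round(dimension_from_interval(0.0025), 4))   # 1.252
```

A production system would interpolate against the premeasured beam position rather than raw time wherever the scan velocity is not constant.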
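Similarly, the triangulation relation ΔZ = K Δd given earlier reduces to a one-line computation once the optical constants are fixed. The magnification and angle below are assumed values for illustration only.

```python
import math

M = 2.0                      # optical magnification (assumed)
THETA = math.radians(30.0)   # triangulation angle (assumed)
K = 1.0 / (M * math.tan(THETA))   # K = (M tan(theta))^-1

def surface_displacement(delta_d):
    """Surface displacement delta-Z from the spot deviation delta-d at the sensor."""
    return K * delta_d

print(round(surface_displacement(1.0), 4))   # 0.866
```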

Image processing. The image processing system converts the raw sensor image data into image features, which are selected according to part classification requirements. The feature extraction process follows two general methods: binary and gray level.

Binary methods proceed by converting a gray scale image of many intensity levels to a binary image of two levels (0,1) by comparing each pixel to a threshold. This conversion can be represented as

B(x,y) = 0, G(x,y) < θ
         1, G(x,y) ≥ θ
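In code, the conversion is a single comparison per pixel; a pure-Python sketch over a hypothetical 3 × 3 gray-level image:

```python
def to_binary(gray, theta):
    """B(x,y) = 0 where G(x,y) < theta, 1 where G(x,y) >= theta."""
    return [[1 if g >= theta else 0 for g in row] for row in gray]

gray = [[ 12,  40, 200],
        [ 90, 180,  30],
        [220,  15, 160]]
print(to_binary(gray, 128))   # [[0, 0, 1], [0, 1, 0], [1, 0, 1]]
```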

A prerequisite for binary conversion is a high-contrast image such as that derived from a silhouette sensor. For images with poor contrast it is difficult to derive a threshold which produces contiguous regions meaningfully related to the original image geometry. Figure 11 shows two examples of gray level images with their corresponding binary images. In Figure 11a, the good contrast in the original image results in a good rendition (11b) of the original geometry for the chosen threshold. The second original (11c) exhibits poorer contrast, and the corresponding binary image (11d) shows poor correlation at all threshold values.

A more powerful analysis involves defining multiple local thresholds; the extreme case defines a threshold for each pixel. If, however, the image conditions are fairly stable, the diagram in Figure 12 shows a way to establish a threshold for each pixel by averaging a no-signal frame with a stable reference frame. The threshold value θ is midway between the corresponding pixels of the two frames. This simple method works quite well if the position and lighting of the test parts are repeatable, but any significant variation in part alignment will produce corresponding errors in the binary image. For example, if the test part were a checkerboard, misalignment by one square would produce a completely incorrect binary image.

Once a suitable binary image is obtained, features relevant to the inspection problem must be extracted. Several general techniques have met with success in industrial applications. The simplest is windowing.⁶ Figure 13 shows a scene divided into a number of possibly overlapping rectangular regions. Each region is treated separately to extract such information as the number of pixels which are on (logic 1). Combinatorial operations are allowed between windows. The result is a feature vector representing the image.

The limitation of this approach is the inability to segment arbitrarily shaped regions. Such regions are generally defined by contiguous pixel class groups called blobs. Figure 14a shows a binary image subdivided into blobs. The dark area represents the exterior of a blob, the gray represents the interior, and the white denotes the blob border. From this information the spatial moments of each blob can be computed. The ijth moment Mij of a blob B(x,y) is defined as

Mij = ∫∫ x^i y^j B(x,y) dx dy,

with both integrals taken from −∞ to +∞.
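In a sampled image the integrals become sums over the on-pixels. The sketch below (hypothetical blob, pixel indices as x and y) also derives the area M00 and the centroid from the first moments:

```python
def moment(blob, i, j):
    """Discrete Mij: sum of x^i * y^j over the on-pixels of B(x,y)."""
    return sum((x ** i) * (y ** j)
               for y, row in enumerate(blob)
               for x, b in enumerate(row) if b)

blob = [[0, 1, 1],
        [0, 1, 1],
        [0, 0, 0]]
area = moment(blob, 0, 0)          # M00: on-pixel count
cx = moment(blob, 1, 0) / area     # centroid x = M10 / M00
cy = moment(blob, 0, 1) / area     # centroid y = M01 / M00
print(area, cx, cy)                # 4 1.5 0.5
```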


[Figure 12 block diagram: SENSOR, A/D CONVERTER, MEMORY, ADDER, COMPARE]

While the entire set of moments completely describes the blob, the first few are generally the most useful since they are easy to compute and carry relevant information. The zeroth moment M00 is the area of the blob, which corresponds to counting the border and interior pixels of the blob. The first moments M01 and M10 are useful to define the centroid of the blob. The ratio of perimeter to area gives an indication of shape.

An alternate descriptor of shape is the blob skeleton. Figure 14b-c shows a blob border and two forms of internal skeletons. Simple skeletons can be constructed from the medial axis.⁷ More complex skeletons, often used in medical applications, can be constructed from the border and capture the complete shape of the blob.

In some cases, such as dimensional inspection, the relation between features becomes important. The relation between blobs, or the topology, can be recovered as the blobs are constructed by forming a relational tree which is referenced at each border crossing. Figure 14d-e shows such a tree along with the associated blobs.

As mentioned earlier, high contrast is necessary for a good binary image but is not always attainable. Gray level image processing enhances the contrast either directly or by reformatting the features into a more detectable form. Three generic methods have been successful: projection, correlation, and functional enhancement.

Projection derives a one-dimensional image line from a two-dimensional image by summing pixels along a given axis.⁸ For example, the data in each vertical column of an m row by n column image could be summed to an n element line. Figure 15 shows an original image and a line summed vertically in the indicated area. This technique can be applied to gray level images, as shown in the hole measurement system, or to binary images, as in the reed relay system. Projection in multiple directions can substitute for more complex analysis in two dimensions.

Figure 15. Projection derives a one-dimensional image by summing pixels along a given axis.

Figure 16. Signal recovered by functional approximation.

In applications requiring more subtle detection of features such as orientation of a shape, correlation provides a global method of obtaining an optimal fit in the presence of noise. Two-dimensional correlation of a known shape S(x,y,θ) over an image G(x,y) can be represented as


C(x,y,θ) = Σ_{i=−n}^{n} Σ_{j=−m}^{m} G(x+i, y+j) S(i, j, θ)

where θ is an orientation vector. This is a very costly computation if performed at each point in the image G(x,y). If potential match locations are first located by a less costly method, then correlation can be applied to refine the exact location. This technique is illustrated in a number of applications.⁹,¹⁰

A reasonable compromise between extensive gray scale image processing and binary processing is to first enhance the gray scale image contrast and then select a simple threshold. If sufficient information is known about the image data, functional approximation can remove unwanted background.¹¹ Figure 16 shows a single line image which consists of a signal, the high frequency information, superimposed on a slowly varying curved background. The companion image shows just the signal, which was recovered by approximating the line by a second-order polynomial over large distances and subtracting the predicted center point of each approximation. This operation can be implemented simply as a nonrecursive digital filter of length n, the width of the approximation.
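A sketch of this background-removal filter, assuming a symmetric 5-point window: the weights below are the standard least-squares coefficients for fitting a second-order polynomial to the window and predicting its center point, so subtracting the prediction from each sample leaves only the high-frequency signal.

```python
# Background removal by local second-order polynomial approximation,
# implemented as a nonrecursive (FIR) filter of length 5.
FIT_WEIGHTS = [-3, 12, 17, 12, -3]   # 5-point quadratic least-squares fit
NORM = 35.0

def remove_background(line):
    half = len(FIT_WEIGHTS) // 2
    out = []
    for k in range(half, len(line) - half):
        predicted = sum(w * line[k + i - half]
                        for i, w in enumerate(FIT_WEIGHTS)) / NORM
        out.append(line[k] - predicted)   # sample minus predicted center
    return out

# A slowly varying quadratic background is removed exactly:
background = [0.5 * x * x for x in range(9)]
print(remove_background(background))   # [0.0, 0.0, 0.0, 0.0, 0.0]
```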
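The two-stage correlation search described above, a cheap screen proposing candidate locations and full correlation refining them, can likewise be sketched. The image, template, and candidate list here are hypothetical.

```python
def correlate_at(G, S, x, y):
    """Correlation score of template S placed at offset (x, y) in image G."""
    return sum(G[y + j][x + i] * S[j][i]
               for j in range(len(S)) for i in range(len(S[0])))

G = [[0, 0, 0, 0],
     [0, 9, 8, 0],
     [0, 7, 9, 0],
     [0, 0, 0, 0]]
S = [[1, 1],
     [1, 1]]
candidates = [(0, 0), (1, 1)]        # proposed by the cheap first stage
best = max(candidates, key=lambda p: correlate_at(G, S, *p))
print(best)   # (1, 1)
```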

Flaw analysis. The success of automatic inspection depends on having considerable a priori information about the part. The most common representations for inspection (templates, grammars, and geometric relations) provide models for decisions regarding part quality.

Templates are effective if the relationship of a significant feature of the part is well known and consistent from part to part. A good example of this approach, shown in Figure 17a, is a corner template mask used to determine IC location.¹⁰ Another template model,¹² shown in Figure 17b, uses circles with variable diameters and center locations to model a series of holes. The edge boundaries are modeled by parallel lines, and the locations are determined by cluster analysis of edge points.

The use of syntactic models or grammars is applicable to one-dimensional image forms such as medial axis and gray level projections. These one-dimensional curves can be described by a sequence of straight lines represented by a sequence of symbols. The symbol sequence is parsed to determine the location of significant features in the data.⁷ Figure 17c is a straight line approximation of gray level data for hole boundary detection; it also shows a simplified context-free grammar which defines basic states, such as hole boundaries, of the image data.

Both template and syntactic analysis can produce geometric information about the part. The final stages of inspection analysis may involve the comparison of drawing requirements with the geometric relations extracted from the image. In the hole measurement application, the model yields such data as hole diameter, wall thickness, and hole spacing, which can be compared directly with drawing specifications.
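As a toy illustration of the syntactic approach (not the authors' grammar), a gray-level line can be reduced to segment symbols and a hole located by parsing the symbol string with the production HOLE → d f* u, where d, f, and u denote falling, flat, and rising segments:

```python
import re

def to_symbols(line, eps=2):
    """Approximate a gray-level line by per-step symbols: f flat, u rising, d falling."""
    syms = []
    for a, b in zip(line, line[1:]):
        syms.append('f' if abs(b - a) <= eps else ('u' if b > a else 'd'))
    return ''.join(syms)

line = [100, 101, 60, 20, 21, 20, 62, 99, 100]   # hypothetical scan across a hole
symbols = to_symbols(line)
print(symbols)                        # fddffuuf
match = re.search(r'df*u', symbols)   # parse for HOLE -> d f* u
print(match.span())                   # (2, 6)
```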

Future directions

An article of this nature would not be complete without some speculation about future trends, and a number of areas seem ripe for rapid near-term progress.

New sensor configurations, combining integrated optics and fiber optic techniques, should provide more efficient coupling of illumination and imaging of internal part regions. Research efforts are already under way to coordinate sensor and part motion to achieve inspection of complex geometries. This will involve general multi-axis manipulation with continuous path control. The path specifications will be derived by evolving CAD tools, interactive graphic tools resembling those now in use for developing numerical control files for NC machines. However, close attention will be paid to optimizing part scanning time as a function of part geometry and manipulation constraints. These control strategies will necessarily include visual sensor feedback to allow surface tracking of normal variations in part surfaces from the control model.

Obviously, rapidly decreasing computer and digital hardware costs will encourage proliferation of automatic inspection systems, as well as making implementation of more general image processing hardware economically practical. This poses a corresponding software engineering challenge. The programming of inspection procedures is already costly; nevertheless, entry of new part descriptions must be simplified. In future systems, the inspection criteria must be provided naturally in accordance with engineering design standards, without requiring user knowledge of the details of inspection technology. This maturing of visual inspection technology will bring automated inspection to broad reaches of industrial applications.

References

1. G. Boothroyd and A. M. Redford, Mechanized Assembly, McGraw-Hill, London, 1978.

2. G. Agin and T. Binford, "Computer Description of Curved Objects," Proc. Third Int'l Joint Conf. Artificial Intelligence, 1973, pp. 629-640.

3. David P. Himmel, "A Laser Measuring System for Automatic Industrial Inspection," Proc. Fourth Int'l Joint Conf. Pattern Recognition, Nov. 1978, pp. 952-954.*

4. P. M. Will and K. S. Pennington, "Grid Coding: A Preprocessing Technique for Robot and Machine Vision," Proc. Second Int'l Joint Conf. Artificial Intelligence, 1971.

5. M. Idesawa, T. Yatagai, and T. Sonna, "Scanning Moire Method and Automatic Measurement of 3-D Shapes," Applied Optics, Vol. 16, No. 8, Aug. 1977, pp. 2152-2162.

6. L. A. Branaman, "Recent Applications of Electronic Vision to Non-Contact Inspection," SME Technical Paper IQ79-549, Society of Manufacturing Engineers, Dearborn, Mich., 1979.

7. J. L. Mundy and R. E. Joynson, "Automatic Visual Inspection Using Syntactic Analysis," Proc. 1977 Conf. Pattern Recognition and Image Processing, pp. 144-147.*

8. J. van Daele, A. Oosterlinck, and H. Van Den Berghe, "Automatic Visual Inspection of Reed Relays," Proc. SPIE Technical Symp. East, 1979, pp. 58-64.

9. M. Mese, I. Yamazaki, and T. Hamada, "An Automatic Position Recognition Technique for LSI Assembly," Proc. Fifth Int'l Joint Conf. Artificial Intelligence, 1977, pp. 685-693.

10. M. L. Baird, "SIGHT-I: A Computer Vision System for Automated IC Chip Manufacture," IEEE Trans. Systems, Man, and Cybernetics, Vol. SMC-8, Feb. 1978, pp. 133-139.

11. J. J. Pearson et al., "Automatic Inspection of Artillery Shell Radiographs," SPIE, Vol. 155, Image Understanding Systems and Industrial Applications, 1978, p. 214.

12. G. B. Porter, T. M. Cipolla, and J. L. Mundy, "Automatic Visual Inspection of Blind Holes in Metal Surfaces," Proc. 1979 Conf. Pattern Recognition and Image Processing, pp. 83-86.*

*These proceedings are available from the IEEE Computer Society Publications Office, 5855 Naples Plaza, Suite 301, Long Beach, CA 90803.

Gilbert B. Porter III is a member of the Visual Information Processing Program at the General Electric Corporate Research and Development Center. His research interests include visual pattern recognition, automatic inspection systems, and robotics. Before joining the center, he was with GE's Ordnance Department. Porter received his BSEE degree in 1968 from Johns Hopkins University and his MS degree in computer science in 1974 from Rensselaer Polytechnic Institute, where he is currently a doctoral candidate.

Joseph L. Mundy is currently manager of the Visual Information Processing Program at the General Electric Corporate Research and Development Center. He has been with the center since 1965, working in the areas of superconductivity, integrated circuits, visual pattern recognition, and automatic inspection systems. Mundy received his PhD in 1969 from Rensselaer Polytechnic Institute, where he is currently an adjunct professor in the Computer Science Department.

