
Analog Integrated Circuits and Signal Processing, 39, 251–266, 2004. © 2004 Kluwer Academic Publishers. Manufactured in The Netherlands.

High Precision Image Centroid Computation via an Adaptive K-Winner-Take-all Circuit in Conjunction with a Dynamic Element Matching Algorithm for Star Tracking Applications

ALEXANDER FISH¹, DMITRY AKSELROD² AND ORLY YADID-PECHT¹

¹The VLSI Systems Center, Ben-Gurion University, Beer-Sheva, Israel
²Motorola Semiconductor, Herzlia, Israel

E-mail: [email protected]

Received November 14, 2002; Revised September 17, 2003; Accepted September 25, 2003

Abstract. An approach for implementing a high precision image target centroid (center of mass, COM) detection system via an adaptive K-winner-take-all (WTA) circuit in conjunction with a 2-D dynamic element matching (DEM) algorithm implementation for image sensor arrays is proposed. The proposed system outputs a high precision COM location of the most salient target in a programmable active region of the field of view (FOV) for star tracking purposes and is suitable for real time applications. The system allows target selection and locking with multiple target tracking capability. This solution utilizes the separability property of the COM, and therefore reduces the computational complexity by utilizing 1-D circuits for the computation. The DEM algorithm, commonly used in ADC and DAC circuits, allows reducing the required WTA circuit precision to 5–6 bits, while still achieving a high output precision. Simulation results prove the concept and demonstrate the high precision COM result. In addition, a possible low-level hardware implementation is described.

Key Words: APS, CMOS imager, center of mass, COM, centroid, K-WTA circuit, K-winner-take-all circuit, dynamic element matching, DEM, star-tracking, image processing

1. Introduction

Visual tracking of salient targets in the field of view (FOV) is a very important operation in star tracking and navigation applications. Several issues should be addressed in tracking of salient targets [1]: (a) the problem of selecting a target from the background, (b) the problem of maintaining the tracking engagement (target locking) and (c) the problem of shifting tracking to new targets in cases when it is decided that another target in the FOV should be selected and tracked. In such applications, the location and magnitude of the selected target must be reported in real time. For small dimension targets in the FOV (small bright spots), target locations are represented by local intensity peaks. Thus, in these cases the target can be selected using the winner-take-all (WTA) circuit, which identifies the highest signal intensity among multiple inputs [1, 2]. The situation is much different when a large target consisting of a large number of pixels is acquired. In this case a target centroid, the center of mass (COM) or first moment of the target intensity distribution, represents the position of the target. Thus, a high precision system for COM detection of salient targets in the FOV is required. In addition to the issues mentioned above, a number of properties should be examined in such a system. These include system suitability for real time applications, hardware requirements, the possibility of implementation as a system-on-a-chip ("smart" sensor), the influence on the sensor spatial resolution (i.e. the fill factor), etc.

The significance of tracking in machine vision, star tracking and navigation applications, as well as in visual attention tasks in biological systems, has triggered numerous efforts in these fields. Many works in this area have been published during the last two decades [1–10]. A number of different COM detection circuits in conjunction with CMOS sensor arrays have been proposed [11–16].

In most previously proposed solutions image processing is performed at the focal plane level. Most of these designs use current-mode pixels, employing photodiodes or phototransistors to produce a current which is an instantaneous function of the incident light [17]. Typically, each pixel consists of a photodetector and local circuitry, performing spatio-temporal computations on the analog brightness signal [5]. These computations are fully parallel and distributed, since the information is processed according to the locally sensed signals and data from pixel neighbors. This concept allows a reduction in the computational cost of the next processing stages placed in the interface. Unfortunately, when image quality and high spatial resolution are important, image processing should be performed in the periphery. For example, in [2] we have presented an APS with 2-D WTA selection (performed in the APS periphery) implemented in 0.5 µm standard CMOS technology and achieving a fill factor of 49% with a 14.4 ∗ 14.4 µm pixel size. In [18] the in-pixel processing strategy has been chosen. In that work, an 80 ∗ 80 µm pixel has been implemented in the same technology, reducing the spatial resolution. However, more complex functions have been achieved.

In this paper, we propose a high precision 2-D COM detection system that allows computation in the sensor array periphery without affecting the sensor properties. The proposed system can be integrated with any kind of CMOS APS array for a system-on-chip approach. The proposed system allows selecting a target in a programmable active zone from the background, maintaining the tracking engagement (target locking) and shifting the tracking task to new targets when required. The system outputs, in real time, the coordinates of the COM location of the most salient target in the active region of the FOV. The general strategy of active region selection, target locking and shifting of tracking to new targets in our system is similar to the one proposed in [1], but the COM detection architecture described here differs from the previously proposed ones in some key features. The main features of our system are: (a) all computations are performed in the sensor array periphery, (b) the target separation from the background is performed via an adaptive, low precision mixed signal K-WTA circuit, (c) the final target COM coordinates are calculated using the Dynamic Element Matching (DEM) algorithm, commonly used in ADC and DAC circuits as a digital solution for reducing the matching problems of analog elements, (d) the system allows COM calculation of large bright targets. In order to better orient the reader, features (b) and (c) will be briefly introduced here.

The usage of a WTA circuit in tracking systems is not a new approach [2, 5, 9, 19, 23, 24]. Generally, an analog WTA circuit, which identifies the highest signal intensity among multiple inputs, is one of the most important building blocks for neural network hardware realizations [20–22], neuromorphic designs [5] and image processing applications [1, 2, 23]. The function of the WTA is to accept input signals, compare their values and produce a high (or low) digital output value corresponding to the largest input, while all other digital outputs are set to a low (or high) output value. Similarly, the K-WTA circuit identifies the K input signals with the largest values. In tracking systems, the WTA circuit can be used for (a) target selection by location of the absolute maximum in the entire image [1], (b) segmentation-based peak detection, where the peak value within each target is determined by the WTA and replicated in every pixel within the target [3], (c) selection and tracking of the target with the strongest spatial contrast, in systems based on contrast-edge target detection [5]. In our proposed system, the K-WTA circuit is used for target selection from the background, similar to thresholding and segmentation-based peak detection, as in [3]. The adaptive nature of the WTA allows selection of bright targets independent of their size. The mixed signal WTA architecture was chosen for two reasons: (a) in this way an adaptive K-WTA is achieved in a very simple way, (b) a simpler interface with the digital hardware implementation of the DEM algorithm is achieved. It will be shown that the DEM algorithm allows reducing the required WTA circuit precision to 5–6 bits, while still achieving a high precision of the output result.

The DEM method was presented in 1975 by Van de Plassche [26] as a means of enhancing the performance of current sources. The digitally implemented DEM algorithm reduced errors caused by mismatches in the analog components of current sources by appropriate selection of different current sources every time it was required. Shortly afterwards, in 1976, Van de Plassche described a Sigma-Delta A/D converter incorporating the DEM [27]. Since then the DEM method has been used primarily to enhance the performance of Delta-Sigma D/A and A/D converters and is usually associated with them. In this paper we describe the DEM application to enhance the performance of a COM tracking system, applying DEM in two dimensions of the sensor array. On one hand, this presents an approach different from the traditional applications of the DEM method. On the other hand, this introduces the DEM method to an area where it has not been used before.

The general description of the proposed system, including the COM definition, computation requirements, computation simplification, the principles of target selection, locking and moving to new targets and, finally, the general DEM algorithm description, is presented in Section 2. The detailed description of the proposed COM detection system is given in Section 3. Section 4 describes the performance of the system via simulation results, compares our system with existing implementations and outlines the hardware required for system implementation. Section 5 concludes the paper.

2. General Description

As mentioned above, the proposed system combines approaches taken from different research areas, some of which the reader may be unfamiliar with. The aim of this section is to orient the reader before the proposed system is described. First, we present the 2-D COM definition of a target located in an M ∗ N size image and show the computational complexity of the direct COM calculation. Second, we describe a possible computation simplification using the separability property of the COM for single-target centroid calculation in the FOV. Next, the general description of the proposed system is presented, the design choices are explained and target selection, locking and the principles for moving to new targets are described. A detailed introduction to the DEM algorithm, with focus on the DWA algorithm used in our system, concludes this section.

2.1. 2-D COM Definition of a Target in an M ∗ N Sized Image

This sub-section presents the theoretical 2-D COM definition of a target in an M ∗ N image. In our specific case, the COM corresponds to the center of brightness of the target located in the processed image, where every pixel has its own intensity level of brightness and defined coordinates. Assuming that the COM computation is performed as the next stage after target detection and non-target pixel inhibition, the COM of the target is identical to the COM of the whole image.

Here we assume a given image of size M ∗ N pixels, where M and N are the numbers of rows and columns in the image, respectively.

The target COM coordinates are calculated as follows:

$$X_{com} = \frac{\sum_{j=1}^{n}\sum_{i=1}^{m} (I_{ij} \cdot j)}{\sum_{j=1}^{n}\sum_{i=1}^{m} I_{ij}} \qquad (1)$$

$$Y_{com} = \frac{\sum_{i=1}^{m}\sum_{j=1}^{n} (I_{ij} \cdot i)}{\sum_{j=1}^{n}\sum_{i=1}^{m} I_{ij}} \qquad (2)$$

where X_com and Y_com are the X and Y center of mass coordinates respectively, j is the column index, i is the row index and I_ij is the intensity of the pixel at column j and row i.
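For concreteness, a minimal NumPy sketch of the direct computation in Eqs. (1) and (2) is given below; the function name and the 1-based coordinate convention are choices of this illustration, not part of the paper.

```python
import numpy as np

def com_direct(img):
    """Direct 2-D center of mass, Eqs. (1) and (2).

    img[i, j] is the intensity of the pixel at row i, column j;
    coordinates are returned 1-based, matching the paper's indices.
    """
    m, n = img.shape
    rows = np.arange(1, m + 1)                          # i = 1..m
    cols = np.arange(1, n + 1)                          # j = 1..n
    total = img.sum()
    x_com = (img * cols[np.newaxis, :]).sum() / total   # Eq. (1)
    y_com = (img * rows[:, np.newaxis]).sum() / total   # Eq. (2)
    return x_com, y_com
```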

2.2. Calculation Complexity Problem in Large Arrays

As can be seen from Eqs. (1) and (2), the COM computation involves multiplication along both the X and Y axes. When applied to an image block of size M ∗ N, the two-dimensional COM calculation is of the order of O(M ∗ N). This explains the difficulty of COM implementation for large sensor arrays, especially when real time results are required.

If the image COM location is calculated via a direct method, as shown in Eqs. (1) and (2), the intensity levels of all pixels in the frame should be stored in memory. Analog storage (capacitors) of the pixel intensity levels is impossible for large arrays because of the huge area required and the low precision of an analog memory. Thus an A/D conversion is necessary for every pixel value. If K bits represent the pixel values, we need a K ∗ M ∗ N bit memory to store the whole frame. For example, a system with a 2 Mpixel array and an 8-bit A/D converter requires 2 MB of memory. In a real time implementation the frame readout and COM calculation should be completed in 1/F seconds, where F is the number of frames per second. In a standard real time application F is 30 frames per second, so the maximum allowed time for frame readout and COM calculation is ∼33 msec. During this time the system should complete the following operations: readout from the sensor array, A/D conversion, storage of the intensity levels of all pixels of the frame in memory, COM calculation of X and Y and output of the resulting COM coordinates. Thus, the COM calculation requires fast access to memory and much on-chip "computation power" in order to complete the result calculation in time.
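As a worked check of the figures quoted above (assuming K = 8 bits, M ∗ N = 2 × 10⁶ pixels and F = 30 frames per second):

$$\text{memory} = K \cdot M \cdot N = 8 \times 2\times 10^{6}\ \text{bits} = 16\ \text{Mbit} = 2\ \text{MB}, \qquad t_{\max} = \frac{1}{F} = \frac{1}{30}\ \text{s} \approx 33\ \text{ms}.$$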

Page 4: High Precision Image Centroid Computation via an Adaptive K-Winner-Take-all Circuit in Conjunction with a Dynamic Element Matching Algorithm for Star Tracking Applications

254 Fish, Akselrod and Yadid-Pecht

2.3. Computation Simplification

Utilizing an important property of the COM, namely separability, can reduce the computational complexity when the COM of a single convex target in the FOV is required. An M ∗ N COM can be calculated by performing M calculations of a 1-D COM successively on all rows and then two additional calculations to obtain the global X and global Y COM results. An N-point 1-D COM can be expressed as follows:

$$X_{c} = \frac{\sum_{j=1}^{N} j \cdot I_{j}}{\sum_{j=1}^{N} I_{j}} \qquad (3)$$

where j is the column index and I_j is the pixel intensity.

For the global 2-D Y COM computation, the row intensity summations are stored and another 1-D COM calculation is performed, keeping the complexity of order M, i.e. O(M).

For the global 2-D X COM computation, the column intensities are calculated via the results from each row in a column-wise manner. Then, another 1-D COM calculation gives the global 2-D X COM.
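A hedged NumPy sketch of this separable computation is shown below; only per-row sums and weighted sums are kept, which is the point of the simplification. Function and variable names are illustrative.

```python
import numpy as np

def com_separable(img):
    """Separable COM: a 1-D COM (Eq. (3)) per row plus two global 1-D steps.

    Only the per-row intensity sums and weighted sums are kept, so the full
    frame never has to be stored.  Coordinates are 1-based, as in the paper.
    """
    m, n = img.shape
    cols = np.arange(1, n + 1)
    rows = np.arange(1, m + 1)

    row_sums = img.sum(axis=1)               # sum_j I_ij for every row i
    row_wsums = (img * cols).sum(axis=1)     # sum_j j * I_ij for every row i

    total = row_sums.sum()
    x_com = row_wsums.sum() / total          # global X from the row results
    y_com = (rows * row_sums).sum() / total  # 1-D COM over the row sums
    return x_com, y_com
```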

Although this possible COM computation system uses the separability property of the COM and reduces the computational complexity to O(M) (in the case of an M ∗ N array), it still requires a large memory capacity and a complicated computation mechanism when implemented as a digital circuit. It is possible to implement this COM calculation technique as an analog circuit [29], but in this case it will suffer from the matching problems of capacitors and resistive networks.

We propose here a COM detection system which utilizes the separability property of the COM. Thus the computational complexity is reduced by utilizing 1-D circuits for the computation. In addition, we propose using a low precision K-WTA circuit, while still retaining a simple mechanism, i.e. the DEM algorithm, to achieve a high precision result. The detailed system description is presented in the next sections.

Fig. 1. (a) An input image with a target, (b) The “selected target” consisting of K winners with very close values (the value differences are less than the WTA precision).

2.4. General System Description

Our tracking system optically receives an image, selects the most salient target in the defined region of the input image, computes the COM of all pixels belonging to the selected target and continuously reports the COM coordinates. The following terminology is used in this paper: the target selected by the system is called the “selected target” and the COM coordinates are called the “target location”.

An adaptive, low precision mixed signal K-WTA circuit is responsible for the target selection. The inputs to the WTA circuit are voltage representations of the brightness levels of all pixels in the image. The K pixels with maximum values are chosen by the WTA, according to the WTA precision and the target size. For example, for a WTA precision of 50 mV, all pixels having differences of less than 50 mV between their values and the value of the brightest pixel can be chosen. With a higher WTA precision, fewer target pixels will be chosen for a given target. This target selection is similar to thresholding and segmentation-based peak detection. The difference here is that the threshold value is not constant and depends on the value of the brightest pixel in the image. Figure 1(a) and (b) show examples of the image with the target and the “selected target”, respectively.
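The adaptive selection rule can be sketched as follows (an assumption-laden illustration: `frame` is an array of pixel voltages and `wta_precision` plays the role of the 50 mV example above):

```python
import numpy as np

def k_wta_select(frame, wta_precision=0.05):
    """Adaptive K-WTA thresholding: keep every pixel whose value lies within
    `wta_precision` (e.g. 50 mV) of the brightest pixel in the frame.

    The threshold tracks the brightest pixel rather than being a fixed
    constant; the number of winners K follows from the target itself.
    """
    v_max = frame.max()
    selected = frame >= (v_max - wta_precision)   # boolean "selected target" mask
    return selected
```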


One of the design challenges is the correct choice of the WTA precision. The most important information concerning the COM coordinates is located in the bright pixels of the target. Thus usage of a high precision WTA, which chooses a single pixel or a small group of pixels with the highest signal intensity, prevents other target pixels from participating in the following COM calculation, causing an unacceptable error in the COM location. The problem gets worse in the case of a large target of interest, when the brightest pixel or pixels can be located very far from the pixel whose coordinates represent the actual COM location. The choice of a low precision WTA prevents a single pixel or a small group of pixels with maximum values from being chosen across the large target, which would cause an incorrect “selected target” and, as a result, a large error in the final target location. In addition, a low precision WTA has a simpler implementation and is less affected by mismatch of analog components. On the other hand, a very low precision WTA causes background pixels to influence the result. This is a common segmentation problem, where it is very difficult to choose a universal threshold that is suitable for all cases. The influence of the WTA precision on the final precision will be discussed in Section 4.

After the target is selected, the next stage is target location computation. The coordinates of the target are calculated based on the pixels of the chosen target. The simplest way is to store all target pixel values with their coordinates in memory and calculate their COM location by a brute force method (Eqs. (1) and (2)), as described above. However, this approach requires a large memory, especially in the case of large targets. We use the DEM method to complete the target location computation. This digital method uses two pointers for pixel selection (row and column pointers) that dynamically choose pixels from the selected target, according to the algorithm described in detail in Section 3. This 2-dimensional pixel selection is performed by two one-dimensional DEM blocks, accomplishing a single pixel selection across the selected target every time the whole image is read out. After the image is read out a number of times, i.e. the Over Sampling Ratio (OSR), OSR different pixels across the selected target have been chosen and the final target location is calculated by averaging all pixels chosen by the DEM algorithm. Thus, the image should be read out OSR times to obtain the final target location. The dependence of the required OSR on the target size and its influence on the final system precision are described in detail in Section 4.

In practical applications, there are often several targets in the FOV, while the target of interest is not necessarily the strongest. As in [1], the active region, within which the system tracks targets, is user-defined and implemented by limiting the DEM pointers to this region only. If no active region is chosen, the system tracks the most salient target in the FOV. In lock mode, this region is system-defined (with the same size pre-defined by the user) and moves according to the target's vector of motion. In order to acquire new targets, the active region can be changed by the user at any time during system operation. For presentation simplicity, in the following system descriptions an active region is not chosen and the most salient target in the FOV is tracked.

2.5. The DEM Algorithm—Introduction

This subsection introduces the reader unfamiliar with the DEM approach and provides the necessary background on its use. The Data Weighted Averaging (DWA) algorithm, the preferred implementation chosen in our case, concludes this subsection.

Delta-Sigma (DS) modulation is widely recognized as a key technique for implementing high-resolution D/A and A/D converters. A 1-bit internal D/A converter employed in both D/A and A/D converters does not require precision component matching due to its inherent linearity. This high linearity enables implementation of the converters using fabrication processes with relatively large component variations. However, 1-bit DS modulators have a number of disadvantages, such as the limited resolution achieved at a given oversampling ratio [30]. To solve the above mentioned problems, multi-bit quantizers were used in DS converters, thus requiring the use of multi-bit internal D/A converters. This enabled increasing the overall resolution without increasing the oversampling ratio, by increasing the number of levels in the DS quantizer [31]. Unfortunately, multi-bit DS systems cannot achieve the required integral linearity without the use of precise component matching. An attractive solution to the problem, due to its simplicity and cost-effectiveness, called Dynamic Element Matching (DEM), was proposed [26]. DEM techniques dynamically rearrange the interconnections of mismatched components, converting the harmonic distortion resulting from element mismatch into wide-bandwidth noise, which is then filtered out of the band of interest. Many DEM techniques have been proposed: dynamic element randomization [32], dynamic element rotation [33], individual level averaging [34], noise-shaping [35], DWA and others.

Fig. 2. The DWA algorithm principle for an 8-element system: the element usage pattern. The dark elements are those selected out of 8 possible elements depending on the input sequence (4, 3, 2, 2, etc.).

The DWA algorithm, a modified version of which has been adopted for use in the proposed system, uses all the DAC elements at the maximum possible rate while ensuring that each element is used the same number of times.

The DWA algorithm was developed by Jackson [28]. In this method, depicted in Fig. 2(a), where the vertical axis represents time, the selected elements rotate through the array. Each time, N elements out of the 8 possible are selected, where N is an input to the DEM system. In this example, the first input equals 4, so the four leftmost elements are selected. The next input value is 3, thus the next 3 elements are selected. The third input equals 2, therefore the rightmost element and the leftmost element are both selected.

A similar principle is utilized in the proposed system. The differences are that here N always equals 1 and that, as in Fig. 2(b), instead of a time axis we have a column axis. Thus the selected element is situated at a different column each time. Another modification is the application of the DEM algorithm in two dimensions.
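A minimal sketch of the classic DWA rotation of Fig. 2 is given below; it reproduces the 4, 3, 2 example from the text. The wrap-around arithmetic is the standard DWA behavior and the function name is ours.

```python
def dwa_select(inputs, n_elements=8):
    """Classic DWA rotation: for every input value N, the next N elements
    after the running pointer are used, wrapping around the array, so that
    all elements are used at the same average rate."""
    ptr = 0
    usage = []
    for n in inputs:                       # e.g. the sequence 4, 3, 2, 2, ...
        chosen = [(ptr + k) % n_elements for k in range(n)]
        usage.append(chosen)
        ptr = (ptr + n) % n_elements       # the pointer keeps rotating
    return usage

# Input sequence from Fig. 2: 4 -> elements 0..3, 3 -> 4..6, 2 -> 7 and 0, ...
print(dwa_select([4, 3, 2, 2]))
```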

3. The Proposed System Architecture

Figure 3 shows the principle scheme of the proposed system. The system includes (a) the 2-D sensor array with a row decoder, (b) row comparators, (c) a D/A converter controlled by a counter module, (d) a row winner detection block, which consists of a 1-D Row DEM module, (e) a “Row winners” storage with a 1-D column DWA and (f) a control unit that is responsible for the clock and control signal generation in the system. It should be noted that the WTA precision here is determined by the DAC precision.

Fig. 3. The principle scheme of the 2-D system employing the DWA approach.

In the proposed system the target of interest is scanned row-by-row using the Row Decoder. The resulting output is a row consisting of voltage representations of pixel brightness (a high voltage represents a high brightness level). The ramp module (a D/A converter controlled by a counter) decreases the reference voltage applied to the comparators from Vdd to 0 V in steps corresponding to the DAC precision. When the ramp reaches the maximum value within the scanned row, it stops and the detected value is latched. At this point, in the case of a large target, more than one comparator will detect a maximum value pixel in the row. This causes K maximum target pixels in the scanned row, with value differences within the D/A precision, to be chosen by the WTA. As mentioned, only one target pixel (called the “target frame winner” in this context), which participates in the final target location computation by averaging all selected “target frame winners” from OSR subsequent frames, should be selected by the system after the whole frame readout. Thus, the system chooses one out of the K row winners (called the “target row winner” in this context) in every row, using the 1-D Row DWA module. After the “target row winner” is chosen in each row by the 1-D Row DWA module, the control unit stores its digital value and column address in the corresponding place in the “Row winners” memory. At a certain stage (after scanning the whole frame) all “target row winners” are stored in the “Row winners” memory and the “target frame winner” is selected out of all “target row winners” using the 1-D Column DWA module.
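A behavioural sketch of a single row scan is given below; quantizing each column voltage to its DAC code and keeping the columns that share the maximum code models the selection of all pixels within one D/A step of the row maximum. Vdd, the bit width and the function name are illustrative assumptions, not values taken from the paper.

```python
import numpy as np

def scan_row_with_ramp(row_voltages, vdd=3.3, bits=6):
    """Behavioural model of one row scan: the ramp DAC steps the comparator
    reference down from Vdd, so the first step at which any comparator fires
    corresponds to the row maximum; every column firing at that same step is
    a row winner, i.e. lies within one DAC LSB of the maximum."""
    lsb = vdd / (2 ** bits)
    codes = np.floor(np.asarray(row_voltages) / lsb)   # DAC step at which each column fires
    top = codes.max()
    winners = np.flatnonzero(codes == top)             # the K winners in this row
    return int(top), winners
```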

The “target row winners” and “target frame winners” are selected with the assistance of the DWA “winner pointers”. Both the 1-D Row DWA and the 1-D Column DWA modules have their own “winner pointer”. The “winner pointer” of the 1-D Row DWA (called the “row winner pointer” in this context) points to the column address of the chosen “target row winner” of the last scanned row. Similarly, the “winner pointer” of the 1-D Column DWA (called the “column winner pointer” in this context) points to the row address of the chosen “target frame winner” of the last scanned frame. When an active region is not selected by the user, the reset value of the “row winner pointer” is the first column of the array and the reset value of the “column winner pointer” is the first row of the array. When an active region is defined by the user, the reset values of the “row winner pointer” and of the “column winner pointer” are the first column and the first row of the active region, respectively. It should be noted that the “row winner pointer” values are limited by the defined active region edges.

“Target Row Winner” Selection Procedure. After system reset (at system startup or when a new active region is defined), the “row winner pointer” is reset. After the K winners in the first row of the array are selected, the “row winner pointer” is increased until it reaches the first winner out of the K chosen. This winner is selected as the “target row winner” of the current row. Thus the “target row winner” of the current row is the first potential winner (out of K) adjacent to the “row winner pointer” in the direction of pointer increase. For example, if the “row winner pointer” points to column no. 4 and we have 5 potential winners in the current row (K = 5), with column addresses 2, 3, 4, 16 and 18 respectively, the winner of the current row will be the pixel with column address 16. Note that the “row winner pointer” value is not reset between subsequent frames, thus the previous frame influences the results of the next frame. After choosing the next winner, the winner pointer receives its column address, moving only in one fixed direction that is decided in advance. It should be noted that, if an active region was selected, the readout of a row outside the selected region does not influence the “row winner pointer”.
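The row-winner selection step can be sketched as follows; the wrap-around to the start of the row when no winner lies beyond the pointer is our assumption of the standard DWA behavior, not an explicit statement of the paper.

```python
def select_row_winner(winner_cols, row_pointer):
    """One 1-D Row DWA step: pick the first potential winner located after
    the 'row winner pointer' in the direction of pointer increase (wrapping
    to the start of the row if no winner lies beyond it), then move the
    pointer onto the chosen column.  winner_cols is assumed sorted."""
    later = [c for c in winner_cols if c > row_pointer]
    winner = later[0] if later else winner_cols[0]   # wrap-around (assumed)
    return winner, winner                            # (target row winner, new pointer)

# Example from the text: pointer at column 4, K = 5 winners at columns 2, 3, 4, 16, 18
print(select_row_winner([2, 3, 4, 16, 18], 4))       # -> (16, 16)
```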

Figure 4 shows the effect of the 2-D DWA algorithm applied to the target in an active region of 6 ∗ 6 pixels. All pixels of the “selected target” are shown in gray and the dotted line designates the contours of the target.

“Target Frame Winner” Selection Procedure. The “target frame winner” is chosen out of all “target row winners” in the same way as each “target row winner” is chosen out of the K row winners. In this case the “column winner pointer” has a function identical to that of the “row winner pointer” and is initialized to the first row address. Only “target row winners” with the same maximum values are candidates to be selected as “target frame winners”. Figure 5(a) shows the process of finding the “target frame winner” out of several candidates for 8 subsequent frames. These candidates are “target row winners”. As can be seen from Fig. 5(a), the “target frame winner” for the first frame has coordinates (1, 1). In the next frame the column pointer advances to the next row, thus the coordinates of the next “target frame winner” are (2, 3). Figure 5(b) shows the coordinates of all frame winners in their order of appearance.

To find the coordinates of the pixel that represents the target COM, we average the coordinates of all “target frame winners” received after repeated scanning of the target. Note that for a dynamic target, an image must be scanned OSR ∗ M times per second, where M is the video frame rate and OSR is the over sampling ratio. To find the COM, the target is scanned OSR times. Simulation results (included in Section 4) show that the resulting estimate of the target COM is close to the theoretical one. Similar to D/A and A/D converters, where the design can trade precision in time for precision in amplitude, the precision of finding the COM is improved by increasing the OSR.
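Putting the pieces together, a behavioural sketch of one COM estimate is given below. It treats all “selected target” pixels as equal-valued winners (a simplification of the “same maximum value” rule), assumes wrap-around pointers and uses 0-based indices; names are illustrative.

```python
import numpy as np

def dem_com_estimate(frame, osr, wta_precision=0.05):
    """Behavioural sketch of the 2-D DEM COM estimator: the static frame is
    scanned `osr` times, every scan yields one 'target frame winner' chosen
    with the row/column DWA pointers, and the COM estimate is the average of
    the chosen (column, row) coordinates."""
    m, n = frame.shape
    selected = frame >= frame.max() - wta_precision        # adaptive K-WTA mask
    row_ptr, col_ptr = -1, -1                              # DWA 'winner pointers'
    picks = []

    def advance(candidates, ptr):
        later = [c for c in candidates if c > ptr]
        return later[0] if later else candidates[0]        # wrap around (assumed)

    for _ in range(osr):
        row_winners = {}                                   # row index -> winning column
        for i in range(m):
            cols = np.flatnonzero(selected[i])
            if cols.size:
                row_ptr = advance(cols.tolist(), row_ptr)  # 1-D Row DWA step
                row_winners[i] = row_ptr
        col_ptr = advance(sorted(row_winners), col_ptr)    # 1-D Column DWA step
        picks.append((row_winners[col_ptr], col_ptr))      # frame winner (column, row)

    xs, ys = zip(*picks)
    return float(np.mean(xs)), float(np.mean(ys))          # averaged over OSR scans
```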

4. Simulations and Hardware Requirements

A set of MATLAB simulations on a number of theoretical and real images has been carried out in order to examine the proposed algorithm for the COM calculation. The results are presented here. The precision of the final result depends on a number of parameters such as the OSR, the target size and the chosen WTA precision, which is determined by the DAC precision.


Fig. 4. 1-D row DWA algorithm stages, as applied to 4 sequentially scanned frames. The “selected target” pixels are designated in gray and the “target row winners” in dark gray. The two right columns represent the coordinates of the “target row winners” (numbers in gray represent column coordinates and numbers in white represent row coordinates).

Fig. 5. The final stage: (a) 1-D column DWA frame-by-frame selection (“target frame winner” coordinates noted in dark gray), (b) Selected coordinates of former “target frame winners” (index of selection noted in dark gray).

In this section the relation between the OSR, the WTA precision, the target size and the error in the target location is studied for the proposed system. Then an algorithm improvement for real time performance is proposed. Using this algorithm, the optimal OSR is found for different target sizes, while remaining within the maximum allowed.

4.1. Algorithm Performance Dependence on Different Parameters

This sub-section examines the influence of different parameters, such as the “selected target” size and the chosen OSR, on the algorithm precision. Here, all simulations were carried out on a theoretical circular “selected target”, as accepted after the WTA operation. The circle diameter ranges from 6 to 60 pixels.

Fig. 6. COM location error vs. OSR for the theoretical circular “selected target” with a diameter of 17 pixels.

Note that in this sub-section the influence of the WTA precision on the COM precision is not yet discussed; all simulations are performed on the “selected target” pixels. This influence is examined in the next sub-sections.

Figure 6 shows the error in the COM location as a function of the OSR for a “selected target” with a diameter of 17 pixels. The error is calculated relative to the theoretically calculated COM coordinates. The COM location error has a sinusoidal behavior with a constant period of OSR = 17 that is wrapped in an exponential envelope, as shown in Fig. 6. The period is determined by the “selected target” size (number of rows), since an integer number of whole-target samplings reduces the error. This error decreases with the number of over samples; that is the reason for the wrapping “envelope”. For small over sampling ratios (OSR < 17) there is a strong improvement in precision with an OSR increase. Following this observation, it seems best to work in the area of the first local minimum.

Simulations of the location error as a function of the number of iterations were carried out for various circle diameters. The simulation results showed, as expected, that the required OSR for optimal system operation increases linearly with the target diameter.

Figure 7 shows the error in the COM location for every first local minimum of the OSR as a function of the circle diameter. It can be seen that the system has very high precision. The final target location error ranges from 0.2 to 0.42 pixels for large targets. For example, for targets with diameters of 50 pixels (occupying ∼1800 pixels with the same illumination levels), the final precision error is 0.4 pixels for OSR = 48.

Fig. 7. COM location error as a function of the circle diameter (for the first local minimum of each).

4.2. Algorithm Improvement for Real Time Performance

Although the system demonstrates high precision, the high OSR required for large targets makes it impractical for large sensor arrays in real time applications. In these applications there is a maximal allowed number of iterations (OSR), determined by the array size, the required number of frames per second and the chosen WTA precision. Thus, there is a need for a DEM algorithm that provides both high precision in the COM calculation and a feasibly low OSR.

In this sub-section an efficient algorithm for reducing the required OSR is proposed. This algorithm allows finding the COM of large targets, while using a low OSR and still retaining a small error. The algorithm is simulated on a theoretical circle. The algorithm simulations for real images are described in Section 4.3. Suppose that the maximal allowed OSR in our system is 9 and we track a theoretical circular “selected target” with a diameter of 56 pixels.

The simulations were performed on a large “perceived circle” with a diameter of 56 pixels that occupies ∼2400 pixels, with a maximum allowed OSR of 9.

Fig. 8. COM location error vs. OSR for a theoretical circular “selected target” with a diameter of 56 pixels for (a) pointer skips = 1, (b) pointer skips = 2, (c) pointer skips = 6 (final).

The first step in the algorithm is the application of the regular WTA with the DWA to the processed image. This step results in an optimal OSR of 55 iterations, which has an error of 0.4 pixels. Figure 8(a) shows the error in the COM location vs. the OSR for the given target.

Due to the real time operation limitation we cannot use this optimal OSR = 55. From Fig. 8(a) we observe that the usage of OSR = 9 will cause an unacceptable error of 23 pixels.

The proposed modification to the algorithm is to increase the “column winner pointer” and “row winner pointer” skip by 1, i.e. pointer skips = 2, both for the rows and for the columns. Thus the number of steps required to scan the whole target is reduced. This modified algorithm results in a reduction of the optimal OSR from 55 to 27 and a negligible error increase from 0.4 to 0.41 pixels at this OSR. Figure 8(b) demonstrates the results of this modification with pointer skips = 2.

Fig. 9. COM location error as a function of the circle target diameter, for the modified algorithm.

Next we increase the pointer skip again, i.e. pointer skips = 3, since the resulting OSR is still high. This results in a reduction of the optimal OSR to 17 and an error increase to 1.2 pixels. We continue increasing the pointer skip value until the resulting OSR is less than or equal to the permitted OSR = 9. In the case of our simulated target the required pointer skip is 6. In this case the OSR is 9 and the error is only 2.05 pixels. Figure 8(c) demonstrates the results of this modified algorithm with pointer skips = 6.
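The skip-selection loop can be sketched as below. The estimate of the optimal OSR as roughly (target rows − 1)/skip is a heuristic consistent with the 55, 27, ~17, 9 sequence reported above, not a formula taken from the paper.

```python
def choose_pointer_skip(target_rows, max_osr):
    """Increase the row/column pointer skip until the (approximate) optimal
    OSR fits the real-time budget.  optimal_OSR ~ (target_rows - 1) // skip
    is a heuristic consistent with the reported sequence for a 56-pixel
    target; it is not an exact expression from the paper."""
    skip = 1
    while (target_rows - 1) // skip > max_osr:
        skip += 1
    return skip

print(choose_pointer_skip(56, 9))   # -> 6, matching the example in the text
```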

As can be seen, the COM location error retains the same sinusoidal behavior with a constant period, wrapped in an “envelope”, following the modification in pointer skips. The increase in pointer skips results in a decrease of the period, which is explained by the reduction in the required number of target samples.

Fig. 10. Images with real objects, (a) A circle-like object (occupying 482 pixels), (b) An ellipse-like object (occupying 160 pixels).

In order to examine the properties of the proposed algorithm, it was applied to circular targets with different sizes. Figure 9 shows the error calculated as a function of the target diameter. It should be noted that all results are for OSRs that are less than or equal to the permitted OSR for real time applications.

As can be seen from the simulations, the proposed algorithm allows a significant reduction in the required number of iterations, while still retaining a high precision. For example, Fig. 9 shows that for a circle with a diameter of 84 pixels (more than 5000 potential winner pixels) the precision is 1.48 pixels and the required OSR is only 9 iterations.

4.3. Algorithm Application to Real Targets and Dependence on the WTA Circuit Precision

In this sub-section the application of the proposed algorithm to two real targets is described. The first one has a circle-like form and the second one has an ellipse-like form. The influence of the WTA precision and the permitted OSR on the final system precision is studied for these targets. Figure 10 shows the simulated images: Fig. 10(a) shows the circle-like target of interest occupying 482 pixels of the sensor array and Fig. 10(b) shows the ellipse-like target that occupies 160 pixels.


Fig. 11. COM location error “period” as a function of the WTA precision (an example for the object in Fig. 10(a)).

First, the algorithm with pointer skips = 1 was applied to the targets for different WTA precisions (3, 4, 5, 6, 7 and 8 bits) in order to examine the influence of the WTA precision on the creation of the “selected targets”. It was shown in Section 4.2 that the error in the COM location changes with a constant period as a function of the OSR for a given size of a “selected target” and that this period increases linearly with the “selected target” size. Figure 11 shows the COM location error period, representing the “selected target” size, as a function of the WTA precision for the target in Fig. 10(a). The size of the “selected target” that was created from the image in Fig. 10(a) can be determined for every simulated WTA precision.

It can be seen that the WTA precision has a strong influence on the “selected target” size when the precision increases from 3 to 5 bits. The increase of the WTA precision from 5 to 7 bits has a small influence on the “selected target” size. The 8-bit precision WTA results in a very small “selected target”. As mentioned in Section 3, the COM precision depends on the number of target pixels that participate in the COM computation, i.e. on the “selected target” size, while the “selected target” size depends on the chosen WTA precision. Simulations carried out on a set of various targets show that the preferable WTA precision is 5–6 bits. In this work a 6-bit precision WTA was used.

The proposed algorithm was applied to the targets shown in Fig. 10, assuming a real time application system with a maximum allowed OSR of 6. The simulations result in a COM computation precision of 0.49 pixels with pointer skips of 3 for the target of Fig. 10(a) and an error in the final COM location of 0.7 pixels with skips of 3 for the target of Fig. 10(b).

4.4. Hardware Requirements

One of the features of the proposed system is its simplicity of implementation, due to minimal hardware requirements and ease of re-design for various sensor array sizes. Section 3, which presented the principle scheme and described the system operation, showed that a digital value of every “target row winner” is stored in a “Row winners” memory. In this way the “target frame winner” is chosen from these “target row winners” with the assistance of the “1-D column DWA” block. This possible implementation was presented in order to explain the concept of the proposed system. Actually, we need to store only (a) one target row and one target column pointer value, (b) a temporary target winner of the current frame, (c) a target winner of the previous frame, (d) temporary and final COM locations and, finally, (e) the edge coordinates of the defined active region. (In addition, a small memory is required for bad pixel elimination, as will be described later.) The “target row winner” of every scanned row of the current frame is compared to the temporary “target frame winner”. If the “target row winner” is less than the temporary “target frame winner”, the temporary winner does not change. If the “target row winner” is higher than the temporary “target frame winner”, the temporary winner is replaced by the given row winner. If they are equal, then the “1-D column DWA” block takes a decision depending on the “column winner pointer” value, according to the DWA algorithm applied, as described in Section 3. At the end of a given frame, the temporary “target frame winner” replaces the “target frame winner” of the previous frame and participates in the computation of the COM location. The temporary result of the COM calculation is stored and only after the required number of OSR frames does this temporary COM result replace the final COM location and get output. In this way the required memory size does not depend on the sensor array size. This is an important advantage in the case of very large sensor arrays.
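A sketch of this streaming update is given below; it assumes rows arrive in increasing order and that ties are resolved by keeping the first candidate found after the column pointer, which is our reading of the DWA tie-break. All names are illustrative.

```python
def update_frame_winner(temp_winner, row_value, row_col, row_index, col_ptr):
    """Streaming replacement for the 'Row winners' memory: compare each
    incoming 'target row winner' with the temporary 'target frame winner'
    and keep only the running best.  `temp_winner` is a (value, column, row)
    triple or None."""
    candidate = (row_value, row_col, row_index)
    if temp_winner is None or row_value > temp_winner[0]:
        return candidate                       # strictly brighter: replace
    if row_value < temp_winner[0]:
        return temp_winner                     # dimmer: keep the current winner
    # equal values: the 1-D column DWA decides, based on the column pointer
    if temp_winner[2] <= col_ptr and row_index > col_ptr:
        return candidate                       # first candidate after the pointer
    return temp_winner
```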

The required number of analog comparators in the WTA analog computation is the only element that depends on the array size: every column has its own comparator. The required precision of the comparators is dictated by the WTA precision. It was mentioned that the system allows the usage of a low precision WTA, so it is possible to design simple, low area comparators. Additionally, the comparator matching problem can be neglected, following the proven suitability of a low precision WTA for a high precision COM result.

4.5. System Efficiency, Limitations and Comparative Analysis

In this subsection the cases where the proposed algorithm is most efficient are shown, algorithm limitations are presented and a comparison to one of the previous works [1] is discussed. First, the algorithm is most efficient in cases of large objects of interest with a noisy background, where a large group of object pixels have a similar brightness level and thus are selected as belonging to the selected object. In these cases the proposed algorithm gives a high precision with low hardware overhead relative to other COM detection techniques. In cases of small objects or objects with a large intensity distribution the algorithm is less efficient. In these cases, systems that perform target selection by location of the absolute maximum in the entire image [1, 2] are more efficient, especially for real time applications. An additional limitation is that the maximum number of possible “scan paths” (the number of sequentially scanned frames with different row winners) is determined by the smallest number of “target winners” in any of the rows in the frame. Choosing an OSR which is greater than the number of “scan paths” is not efficient. Thus the number of “scan paths” should be enlarged. This problem can be solved in a simple way by eliminating the influence of rows with a small number of “target row winners” (<OSR) on the row DWA pointer and introducing an additional row DWA pointer in order to choose row winners in these rows.

Compared to [1], our proposed system allows computation in the array periphery without affecting sensor quality. On the other hand, the in-pixel processing strategy chosen in [1] enables fully parallel, very fast processing (tracking of targets moving at up to 7000 pixels/s was reported). The system proposed in [1] is sensitive to matching problems in the analog WTA circuit, while in our currently proposed system the usage of a low resolution WTA significantly reduces this sensitivity. As was mentioned, the general strategy of active region selection, target locking and shifting tracking to new targets in our system is similar to that proposed in [1], but the implementation is different. In our system, active region selection is performed by simple limitation of the DWA algorithm row and column pointers, while in [1] it is performed by inhibiting a portion of the saliency map, thus restricting the activity of the WTA circuit to an active region.

An additional feature of the proposed system is bad pixel elimination. When working with large arrays of photoreceptors or other analog input elements, bad pixels are a fact of life. The concept that we have chosen to solve this problem is similar to the one we have described in [2]. A bad pixel can have a high value, so it can be selected as the winner regardless of the other pixel values. In the proposed system bad pixels are disabled with a special “dark mode”, in which a dark image is input. The bad pixel elimination is realized in two stages. In the first one, the “dark mode” stage, we process a dark image (very low background without an object of interest). The circuit finds bad bright pixels (higher than the pre-defined threshold), using the same K-WTA circuit. All bad pixels are found and their addresses are stored in the memory.

In the second stage (regular system operation), a real image is processed while 0 V values are transmitted to the WTA circuit instead of the “bad” pixels that were found in the “dark mode” stage.
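The two-stage bad pixel handling can be sketched as follows (the threshold value, names and the NumPy representation are assumptions of this illustration):

```python
import numpy as np

def find_bad_pixels(dark_frame, threshold):
    """'Dark mode' stage: with a dark image at the input, every pixel brighter
    than the pre-defined threshold is flagged as bad and its address stored."""
    return np.argwhere(dark_frame > threshold)        # (row, col) addresses

def mask_bad_pixels(frame, bad_addresses):
    """Regular operation: force the stored bad pixels to 0 V before the WTA."""
    cleaned = frame.copy()
    for r, c in bad_addresses:
        cleaned[r, c] = 0.0
    return cleaned
```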

5. Conclusions

An approach for implementing a high precision COM detection system via an adaptive K-WTA circuit in conjunction with a DEM algorithm for star tracking applications was proposed. The system outputs a high precision COM location of the most salient target in the field of view for tracking and navigation purposes and is suitable for real time applications. The proposed solution utilizes the separability property of the COM, and therefore reduces the computational complexity by utilizing 1-D circuits for the computation. It was shown that the DEM algorithm allows reducing the required WTA circuit precision to 5–6 bits, while still achieving a high precision output. The low precision requirement solves the matching problem in the analog comparators. In addition, the proposed COM system has very low hardware requirements even for large sensor arrays. The system allows target selection and locking with multiple target tracking capability.

A set of MATLAB simulations on a number of theoretical and real targets has been carried out to examine the accuracy of the proposed algorithm. The results show that the proposed system enables achieving a high precision COM output even for very large theoretical and real targets.

Future plans include a full system implementation and research on applying other DEM methods in such systems.

Acknowledgment

The authors thank the anonymous referees for their thorough review and useful comments.

References

1. V. Brajovic and T. Kanade, "Computational sensor for visual tracking with attention." IEEE Journal of Solid-State Circuits, vol. 33, no. 8, Aug. 1998.

2. A. Fish, D. Turchin, and O. Yadid-Pecht, "An APS with 2-dimensional winner-take-all selection employing adaptive spatial filtering and false alarm reduction." IEEE Trans. on Electron Devices, Special Issue on Image Sensors, pp. 159–165, Jan. 2003.

3. T.G. Morris, T.K. Horiuchi, and S.P. DeWeerth, "Object-based selection within an analog VLSI visual attention system." IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 45, no. 12, Dec. 1998.

4. T. Horiuchi and E. Niebur, "Conjunction search using a 1-D, analog VLSI-based, attentional search/tracking chip." In Proc. ARVLSI Conference, Atlanta, 1999, pp. 276–290.

5. G. Indiveri, "Neuromorphic analog VLSI sensor for visual tracking: Circuits and application examples." IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, vol. 46, no. 11, pp. 1337–1347, 1999.

6. G. Indiveri, P. Oswald, and J. Kramer, "An adaptive visual tracking sensor with a hysteretic winner-take-all network." In Proc. IEEE International Symposium on Circuits and Systems, Scottsdale, Arizona, USA, May 2002.

7. C. Koch and B. Mathur, "Neuromorphic vision chips." IEEE Spectrum, vol. 33, pp. 38–46, May 1996.

8. T.G. Morris and S.P. DeWeerth, "Analog VLSI excitatory feedback circuits for attentional shifts and tracking." Analog Integrated Circuits and Signal Processing, vol. 13, pp. 79–92, May/June 1997.

9. T. Horiuchi, T. Morris, C. Koch, and S. DeWeerth, "Analog VLSI circuits for attention-based, visual tracking." In Advances in Neural Information Processing Systems, M.C. Mozer, M.I. Jordan, and T. Petsche (Eds.). MIT Press: Cambridge, MA, 1997, vol. 9.

10. M. Clapp and R. Etienne-Cummings, "A dual pixel-type imager for imaging and motion centroid localization." In Proc. ISCAS'01, Sydney, Australia, May 2001.

11. T. Shibata, T. Nakai, N. Mei Yu, Y. Yamashita, M. Konda, and T. Ohmi, "Advances in Neuron-MOS applications." In ISSCC Proc., 1996, pp. 304–305.

12. S.P. DeWeerth, “Analog VLSI circuits for sensorimotor feed-back.” Ph.D thesis, Caltech, Pasadena, CA, 1991.

13. M. Tartagni and P. Perona, “Computing centroids in current-mode technique.” Electronic Letters, vol. 29, no. 21, pp. 1811–1813, 1993.

14. N. Mei Yu, T. Shibata, and T. Ohmi, “A real-time center-of-mass tracker circuit implemented by neuron MOS technology.”IEEE Transactions on Circuits and Systems—II, vol. 45, no. 4,April 1998.

15. R.C. Meitzler, K. Strohbehn, and A.G. Andreou, “A silicon retinafor 2-D position and motion computation.” In Proc. ISCAS’95,New York, USA, 1995.

16. C. Sun, G. Yang, C. Wrigley, O. Yadid-Pecht, and B. Pain,“A smart CMOS imager with on-chip high-speed windowedcentroiding capability.” In Proc. IEEE Workshop on CCDsand Advanced Image Sensors, Karuizawa, Japan, June 10–12,1999.

17. S.P. DeWeerth, “Analog VLSI circuits for stimulus localizationand centroid computation.” Int. J. of Comp. Vision, vol. 8, no. 3,pp. 191–202, 1992.

18. T. Komuro, I. Ishii, M. Ishikawa, and A. Yoshida, “A digitalvision chip specialized for high-speed target tracking.” IEEETrans. on Electron Devices, Special Issue on Image Sensors,pp. 191–199, Jan. 2003.

19. T.G. Morris and S.P. DeWeerth, “Analog VLSI excitatory feed-back circuits for attentional shifts and tracking.” Analog In-tegrated Circuits and Signal Processing, vol. 13, pp. 79–92,May/June 1997.

20. J. Lazzaro, S. Ryckebusch, M.A. Mahowald, and C.A. Mead, "Winner-take-all networks of O(n) complexity." In Advances in Neural Information Processing Systems, D.S. Touretzky (Ed.). Morgan Kaufmann: San Mateo, CA, 1989, vol. 1, pp. 703–711.

21. J.A. Starzyk and X. Fang, "CMOS current mode winner-take-all circuit with both excitatory and inhibitory feedback." Electronics Letters, vol. 29, no. 10, May 1993.

22. M.A.C. Maher, S.P. DeWeerth, M.A. Mahowald, and C.A. Mead, "Implementing neural architectures using analog VLSI circuits." IEEE Trans. Circuits Syst., vol. CAS-36, pp. 643–652, 1989.

23. O. Yadid-Pecht, E.R. Fossum, and C. Mead, "APS image sensors with a winner-take-all (WTA) mode of operation." JPL/Caltech New Technology Report, NPO 20212.

24. Z. Kalayjian, J. Waskiewicz, D. Yochelson, and A.G. Andreou, "Asynchronous sampling of 2D arrays using winner-takes-all arbitration." In Proc. IEEE ISCAS'96, NY, USA, vol. 3, pp. 393–396, 1996.

25. R. Perfetti, "On the robust design of k-winners-take-all networks." IEEE Transactions on Circuits and Systems II: Analog and Digital Signal Processing, Jan. 1995.

26. R.J. van de Plassche, "Precision current-source arrangement." U.S. Patent 3 982 172, U.S. Philips Corporation, Sept. 21, 1976.

27. R.J. van de Plassche, "Dynamic element matching for high-accuracy monolithic D/A converters." IEEE Journal of Solid-State Circuits, vol. SC-11, no. 6, pp. 795–800, 1976.

28. H.S. Jackson, "Circuit and method for canceling nonlinearity error associated with component value mismatches in a data converter." U.S. Patent 5 221 926, June 22, 1993.

29. D. Akselrod, A. Fish, and O. Yadid-Pecht, "A mixed signal enhanced WTA tracking system via 2-D dynamic element matching." In Proc. ISCAS'02, Arizona, USA, May 2002.

30. J.C. Candy, "A use of double integration in sigma-delta modulation." IEEE Trans. Commun., vol. 33, no. 3, pp. 249–258, 1985.

31. L.R. Carley, "A noise-shaping coder topology for 15+ bit converters." IEEE J. Solid-State Circuits, vol. SC-24, pp. 267–273, April 1989.

32. L.R. Carley and J. Kenney, "A 16-bit 4th-order noise-shaping D/A converter." In Proceedings of the IEEE 1988 Custom Integrated Circuits Conference, Rochester, NY, May 1988, pp. 21.7.1–21.7.4.

33. Y. Sakina, "Multi-bit sigma-delta analog-to-digital converters with nonlinearity correction using dynamic barrel shifting." Electronics Research Laboratory, College of Engineering, University of California, Berkeley, CA, Memorandum No. UCB/ERL M93/63, 1993.

34. B.H. Leung and S. Sutarja, "Multibit sigma-delta A/D converter incorporating a novel class of dynamic element matching techniques." IEEE Trans. Circuits Syst. II, vol. 39, pp. 35–51, Jan. 1992.

35. R.W. Adams and T.W. Kwan, "Data-directed scrambler for multi-bit noise shaping D/A converters." U.S. Patent 5 404 142, April 4, 1995.

Alexander Fish was born in Kharkov, Ukraine, in 1976. He received the B.Sc. degree in Electrical Engineering from the Technion, Israel Institute of Technology, Haifa, Israel, in 1999, and the M.Sc. degree in Electrical Engineering from Ben-Gurion University, Beer-Sheva, Israel, in 2002. He is currently working toward the Ph.D. degree in Electro-Optics Engineering at Ben-Gurion University.

He has been a Teaching and Research Assistant at the VLSI Systems Center, Ben-Gurion University, since 1999. His research interests include CMOS active pixel sensor design, image processing, pattern recognition and low-power design techniques for digital and analog circuits.

Dmitry Akselrod was born in Lvov, Ukraine, in 1975. He received the B.Sc. degree in electrical engineering from the Technion, Israel Institute of Technology, Haifa, Israel, in 1999, and the M.Sc. degree in electrical engineering from Ben-Gurion University, Beer-Sheva, Israel, in 2002.

He has been a Research Assistant at the VLSI Systems Center, Ben-Gurion University, since 2000. His research interests include Delta-Sigma converter architectures, dynamic element matching techniques and image processing. He has been with Motorola Semiconductor, Herzlia, Israel, since 1998, where he is currently a Senior Staff Engineer working on third-generation wireless systems.

Orly Yadid-Pecht received her B.Sc. from the Department of Electrical Engineering at the Technion, Israel Institute of Technology, in 1983. She completed her M.Sc. in 1990 and her D.Sc. in 1995, also at the Technion. She was a research associate of the National Research Council (USA) from 1995 to 1997 in the area of advanced image sensors at the Jet Propulsion Laboratory (JPL)/California Institute of Technology (Caltech). In 1997 she joined Ben-Gurion University in Israel as a faculty member in the Departments of Electrical and Electro-Optical Engineering. There she founded the VLSI Systems Center, specializing in CMOS image sensors.

Her areas of interest are integrated CMOS image sensors, smart sensors, image processing, neural networks and pattern recognition algorithm implementation. She has published dozens of papers and patents and has led over 10 research projects supported by government and industry. Dr. Yadid-Pecht was an Associate Editor of the IEEE Transactions on VLSI and is currently the Deputy Editor-in-Chief of the IEEE Transactions on CAS-I. She was also the chair of the IEEE CAS sensors technical committee and a member of the neural networks and analog technical committees, a member of the technical committee for the IEEE bi-annual Workshop on CCDs and Advanced Image Sensors, and a member of the SPIE Solid State Sensor Arrays program committee. Currently, she is also the general chair of the IEEE International Conference on Electronics, Circuits and Systems (ICECS) that will be held in Israel in 2004.