
A Real-Time Multiple-Vehicle Detection and Tracking System with Prior Occlusion Detection and Resolution

Bing-Fei Wu, Shin-Ping Lin, Yuan-Hsin Chen

Department of Electrical and Control Engineering, National Chiao Tung University, 1001 Ta-Hsueh Road, Hsinchu 30050, Taiwan

Email: [email protected]

Abstract – The proposed multiple-vehicle detection and tracking (MVDT) system utilizes a color background to segment moving objects and exploits relations among the moving objects and existing trajectories to track vehicles. Initially, the background is extracted by classification. Then, it is regularly updated with the previous moving objects to guarantee robust segmentation under changing luminance. A partially wrongly converged background caused by roadside parked vehicles is corrected later by checking fed-back trajectories, so that false detections are avoided after those vehicles move away. In the tracking processing, relations of distances, or of distances and angles, are applied to determine whether to create, extend, or delete a trajectory. If occlusion is detected after trajectory creation, it is resolved by rule-based tracking reasoning; otherwise, lane information is used. Finally, traffic parameter calculations based on the trajectories are listed. Moreover, parameter automation is proposed for easy setup of the system.

Keywords - Detection, segmentation, tracking, occlusion, rule-based reasoning, traffic parameter.

I. INTRODUCTION

Visual sensing systems have many applications in intelligent transportation systems (ITS). In comparison to traditional sensing systems, they are easier to set up, lower in cost, and more versatile. However, they spend more processing time than traditional ones. Therefore, a real-time multiple-vehicle detection and tracking (MVDT) system is proposed to reduce the processing time. Generally, an MVDT system is constituted by vehicle detection and tracking processing. In the following two paragraphs, we review previous works related to these two processing stages.

In vehicle detection processing, the virtual slit [1] and the virtual loop [2] exploit the concept of the inductive loop [3] to detect passing vehicles by monitoring illumination changes in pre-specified regions of a frame. As this kind of processing checks only the pre-specified regions of the frame, its processing speed is fast. However, it is hard to set up, expensive, and limited in function. Another alternative uses the double-difference operator [4] with gradient magnitude to detect vehicles. Although this kind of processing is more complicated than the previous one, it can gather more vehicle information. However, it is hard to adapt to the luminance changes caused by daylight, weather, or an automatic electric shutter (AES). Consequently, optical-flow-based techniques, which estimate the intensity motion between two subsequent frames, are used to overcome the changes of luminance [5], [6]. However, they require much time to find an optimal solution. Hence, techniques that dynamically update an estimated background to detect moving objects, such as those of Smith et al. [7], Gupte et al. [8], and Koller et al. [9], can adapt to the luminance changes in real time. However, these techniques depend on an initial background with no vehicle inside.

For optical-flow-based tracking techniques, maximum a posteriori (MAP) estimation is used to prove or disprove a given hypothesis (trajectory) based on Bayesian inference. For instance, Kamijo et al. [1] and Tao et al. [6] use MAP to track occluded vehicles by a spatio-temporal Markov random field (ST-MRF) and a dynamic layer shape, motion, and appearance model, respectively. However, MAP needs much computation power. For complexity reduction, Li et al. [5] and Smith et al. [7] use sequential importance sampling (SIS), which belongs to the class of Monte Carlo methods, and the sum of squared differences (SSD) with dynamic pyramiding, respectively. Even with these improvements, MAP can only approach real-time performance. In comparison to MAP, extended-Kalman-filter (EKF) based techniques are faster [2], [10]. Such a technique estimates the positions and velocities (states) of vehicles represented in dynamic models. Although the technique is robust, it converges to wrong states if vehicles are occluded. For these reasons, rule-based reasoning is used to reduce the processing time and to overcome the occlusion problem. However, the methods used by [4], [8] spend much time to detect occlusion.

In this study, dynamic segmentation and rule-based tracking reasoning are proposed for the vehicle detection and tracking processing, respectively. The two techniques take processing speed, preciseness, and robustness into consideration. The details are described in the following two sections.

II. SYSTEM OVERVIEW

The proposed MVDT system consists of dynamic segmentation and rule-based tracking reasoning. First, the dynamic segmentation uses the current video frame, the previous moving objects, and the previous trajectories to segment the current

0-7803-9314-7/05/$20.00 ©2005 IEEE

2005 IEEE International Symposium on Signal Processing and Information Technology


moving objects. Then, the rule-based tracking reasoning utilizes the current moving objects together with the previous trajectories to find the current trajectories. The block diagram of the proposed system is shown in Fig. 1.

[Fig. 1: frames enter the dynamic segmentation block; the segmented moving objects feed the rule-based tracking reasoning block, which outputs trajectories; two delay blocks feed the previous moving objects and the previous trajectories back to the inputs.]

Fig. 1. The block diagram of the proposed MVDT system.

III. DYNAMIC SEGMENTATION AND RULE-BASED TRACKING REASONING

For the real-time issue, the dynamic segmentation reduces the vehicle detection problem to a subtraction between the current frame and a statistically maintained color background, and the rule-based tracking reasoning simplifies the vehicle tracking problem to relating the centers of segmented moving objects to the centers of the last trajectory nodes. For the preciseness issue, the rule-based tracking reasoning uses spatial and spatio-temporal filters to eliminate falsely detected and vibrating moving objects. For the robustness issue, the dynamic segmentation exploits background compensation to maintain the color background, and the rule-based tracking reasoning considers prior occlusions and mis-detected objects.

A. Color Background Extraction

The concept of color background extraction is to exploit the appearance probability (AP) of each pixel's color to extract the background. That is, over a sufficiently long time, the color with the maximum AP most probably belongs to the background. However, obtaining the AP for every pixel color requires a lot of memory and still needs de-noising. Hence, the AP of each pixel's color class is utilized instead.

A color class located at coordinate (x, y) has an ordered number c to uniquely identify it. To calculate the AP and to classify a pixel's color, a counter, CC(x, y, c), and a color mean, CM(x, y, c) = [CM_R(x, y, c), CM_G(x, y, c), CM_B(x, y, c)]^T, are created. The total number of classes located at the same coordinate is denoted as NC(x, y).

Initially, there is only one class for each pixel. As the pixel of a frame located at (x, y) and sampled at time instance t is denoted as f(x, y, t) = [f_R(x, y, t), f_G(x, y, t), f_B(x, y, t)]^T, the 0-th color mean, counter, and number of classes are initialized as CM(x, y, 0) = f(x, y, 0), CC(x, y, 0) = 1, and NC(x, y) = 1, respectively. In addition, a color background, BG(x, y) = [BG_R(x, y), BG_G(x, y), BG_B(x, y)]^T, is set to [-1, -1, -1]^T to indicate that no pixel has converged yet.

Then, a decision function utilizes the sum of absolute differences (SAD), SAD(x, y, c), shown in Eq. (1) to determine whether to classify the current pixel color into the c-th class or to create a new class.

SAD(x, y, c) = |f_R(x, y, t) - CM_R(x, y, c)| + |f_G(x, y, t) - CM_G(x, y, c)| + |f_B(x, y, t) - CM_B(x, y, c)|   (1)

First, the decision function assigns the pixel to a class j by Eq. (2). Then the corresponding SAD, SAD(x, y, j), is compared with a fixed threshold TH1. If SAD(x, y, j) is less than TH1, CM(x, y, j) and CC(x, y, j) are updated according to Eq. (3). Otherwise, a new class is created according to Eq. (4).

j = argmin_{0 ≤ c < NC(x, y)} SAD(x, y, c)   (2)

CM(x, y, j) = (CM(x, y, j) · CC(x, y, j) + f(x, y, t)) / (CC(x, y, j) + 1),
CC(x, y, j) = CC(x, y, j) + 1   (3)

CM(x, y, NC(x, y)) = f(x, y, t),
CC(x, y, NC(x, y)) = 1,
NC(x, y) = NC(x, y) + 1   (4)

As time goes by, the counter of the class that belongs to the background increases rapidly. The AP, AP(x, y, c), of each class is defined in Eq. (5). The k-th class, which is the most probable to be classified as background, is decided by Eq. (6). Then, CM(x, y, k) is rounded into the background BG(x, y) or not, depending on whether the AP of the class is greater than a dynamic threshold TH2.

AP(x, y, c) = CC(x, y, c) / Σ_{c=0}^{NC(x, y)-1} CC(x, y, c) = CC(x, y, c) / (t + 1)   (5)


k = argmax_{0 ≤ c < NC(x, y)} CC(x, y, c)   (6)
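As an illustration, the per-pixel classification bookkeeping of Eqs. (1)-(6) can be sketched in Python as follows. This is a minimal single-pixel sketch; the class name and the threshold values TH1 = 30 and th2 = 0.5 are our own assumptions, since the paper does not fix them.

```python
TH1 = 30  # assumed SAD threshold for joining an existing class (not given in the paper)

def sad(f, cm):
    """Eq. (1): sum of absolute RGB differences between a pixel and a class mean."""
    return sum(abs(a - b) for a, b in zip(f, cm))

class PixelClasses:
    """Color classes of one pixel: means CM(x, y, c) and counters CC(x, y, c)."""

    def __init__(self, first_color):
        self.cm = [list(first_color)]  # CM(x, y, 0) = f(x, y, 0)
        self.cc = [1]                  # CC(x, y, 0) = 1

    def observe(self, f):
        # Eq. (2): nearest class j under the SAD distance.
        j = min(range(len(self.cm)), key=lambda c: sad(f, self.cm[c]))
        if sad(f, self.cm[j]) < TH1:
            # Eq. (3): running mean and counter update.
            n = self.cc[j]
            self.cm[j] = [(m * n + v) / (n + 1) for m, v in zip(self.cm[j], f)]
            self.cc[j] = n + 1
        else:
            # Eq. (4): create a new class.
            self.cm.append(list(f))
            self.cc.append(1)

    def background(self, t, th2=0.5):
        # Eqs. (5)-(6): the class with the maximal counter is rounded into the
        # background only if its appearance probability exceeds th2.
        k = max(range(len(self.cc)), key=lambda c: self.cc[c])
        if self.cc[k] / (t + 1) > th2:
            return [round(v) for v in self.cm[k]]
        return [-1, -1, -1]
```

For example, a pixel that shows its background color in 8 of 10 frames converges to that color, while an unconverged pixel stays at [-1, -1, -1].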

B. Moving Objects Segmentation

With the extracted background, we can detect the moving objects by checking the sum of differences between the background and the input frame. The sum of differences for each pixel is defined in Eq. (7). A binary mask of moving objects, MM(x, y), which is the complement of a background mask BM(x, y), is obtained by Eq. (8). In the equation, MTH_L and MTH_H are dynamic thresholds described in the next paragraph. The moving object mask is exploited by the vehicle tracking processing to track trajectories. The background mask is used to select the regions of the background that need to be updated, with a predefined n, by Eq. (9). If n is too large, the background adapts to slow illumination changes with difficulty. However, if n is small, the background is easily affected by moving objects. For this reason, the selection of n is important. In our experience, n = 8 satisfies both issues mentioned above.

MSD(x, y) = (f_R(x, y, t) - BG_R(x, y)) + (f_G(x, y, t) - BG_G(x, y)) + (f_B(x, y, t) - BG_B(x, y))   (7)

MM(x, y) = 1 - BM(x, y) = 1 if MSD(x, y) < MTH_L or MSD(x, y) > MTH_H; 0 otherwise   (8)

If BM(x, y) = 1, BG(x, y) = ((n - 1)/n) · BG(x, y) + (1/n) · f(x, y, t)   (9)
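A sketch of the segmentation and update step, under our reading of Eq. (8): a pixel is moving when its signed difference falls outside the band [MTH_L, MTH_H], and background pixels are blended with weight 1/n (n = 8 in the text).

```python
N = 8  # background update weight n from the text

def msd(f, bg):
    """Eq. (7): signed sum of RGB differences between the frame and the background."""
    return sum(fc - bc for fc, bc in zip(f, bg))

def segment_and_update(frame, bg, mth_l, mth_h):
    """Eqs. (8)-(9) over a list-of-rows image: collect moving pixels and blend
    background pixels into bg in place with weight 1/N."""
    moving = set()
    for y, row in enumerate(frame):
        for x, f in enumerate(row):
            d = msd(f, bg[y][x])
            if d < mth_l or d > mth_h:   # MM(x, y) = 1: outside the valleys
                moving.add((x, y))
            else:                        # BM(x, y) = 1: Eq. (9) blend
                bg[y][x] = tuple((N - 1) / N * b + fc / N
                                 for b, fc in zip(bg[y][x], f))
    return moving
```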

As the background is updated at each time instance, the proposed dynamic segmentation can overcome slow illumination changes, such as the change of daylight or weather, with a fixed threshold. However, for rapid illumination changes, such as the effect of the AES, a fixed threshold causes false detections. Therefore, an adaptive thresholding method is proposed to find the low valley VL = [VL_R, VL_G, VL_B]^T and the high valley VH = [VH_R, VH_G, VH_B]^T of the filtered difference distribution FD(n) = [FD_R(n), FD_G(n), FD_B(n)]^T between the background and the subsequent frame. The filtered difference distribution is the difference distribution D(n) = [D_R(n), D_G(n), D_B(n)]^T of Eq. (10) smoothed by Eq. (11). The reason for not using D(n) to find the valleys directly is that D(n) is noisy; the noise would prevent the Laplacian operator in Eq. (12) from finding the correct valleys. The concept of finding valleys in FD(n) is based on the observation that, no matter whether a frame is affected by the AES or not, the most frequently appearing differences in D(n) most probably belong to the background. With the valleys, we can then obtain the dynamic thresholds MTH_L and MTH_H mentioned before by Eq. (13).

D_C(n) = Σ_{f_C(x, y, t) - BG_C(x, y) = n} 1, where C = R, G, B   (10)

FD(n) = (1/(2p + 1)) · Σ_{i=n-p}^{n+p} D(i), where 2p + 1 is the number of taps   (11)

∇²FD_C(n) = FD_C(n + 1) - 2 · FD_C(n) + FD_C(n - 1),
{VL_C, VH_C} = arg_n {∇²FD_C(n) = 0}, with VL_C < VH_C and C = R, G, B   (12)

MTH_L = VL_R + VL_G + VL_B,  MTH_H = VH_R + VH_G + VH_B   (13)
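The distribution, smoothing, and threshold steps of Eqs. (10), (11), and (13) can be sketched per channel as below; the Laplacian of Eq. (12) is included as a helper, while the actual valley search among its zero crossings is omitted for brevity.

```python
def difference_histogram(frame_c, bg_c):
    """Eq. (10): D_C(n) = number of pixels whose difference f_C - BG_C equals n."""
    d = {}
    for f, b in zip(frame_c, bg_c):
        d[f - b] = d.get(f - b, 0) + 1
    return d

def smoothed(d, n, p=2):
    """Eq. (11): value at n of the (2p+1)-tap moving average FD of D."""
    return sum(d.get(i, 0) for i in range(n - p, n + p + 1)) / (2 * p + 1)

def laplacian(d, n, p=2):
    """Eq. (12): discrete Laplacian FD(n+1) - 2 FD(n) + FD(n-1); the valleys
    VL < VH are sought among its zero crossings."""
    return smoothed(d, n + 1, p) - 2 * smoothed(d, n, p) + smoothed(d, n - 1, p)

def thresholds(vl_rgb, vh_rgb):
    """Eq. (13): MTH_L and MTH_H as channel sums of the low and high valleys."""
    return sum(vl_rgb), sum(vh_rgb)
```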

C. Background Compensation

For a partially wrongly converged background caused by roadside parked vehicles, trajectories are fed back from the vehicle tracking processing to decide whether the moving objects are falsely detected or not. If the moving objects are falsely detected, the following three situations occur:

1. The centers of the moving objects do not change much over a period of time;

2. The centers of the first trajectory nodes are not near the boundary of the detection zone;

3. There are no edges near the contour of the moving objects.

If any trajectory of a moving object satisfies the three situations, the region of the moving object will be set as background.
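A sketch of this feedback test; all tolerances (frame count, center movement, boundary distance, edge count) are assumed values, since the paper does not quantify the three situations.

```python
def is_false_detection(centers, dist_to_zone_boundary, edge_count,
                       min_frames=30, move_tol=2.0, boundary_tol=10, edge_tol=5):
    """True when an object's trajectory satisfies the three situations above:
    1. its center barely moves for a period of time (min_frames, move_tol);
    2. it did not start near the detection-zone boundary (boundary_tol);
    3. few edges lie near its contour (edge_tol).
    All thresholds are hypothetical."""
    if len(centers) < min_frames:
        return False
    x0, y0 = centers[0]
    stationary = all(abs(x - x0) <= move_tol and abs(y - y0) <= move_tol
                     for x, y in centers)
    return (stationary and dist_to_zone_boundary > boundary_tol
            and edge_count < edge_tol)
```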

D. Filter Out Falsely Detected Objects

In general, some falsely detected objects can be eliminated by the spatial properties obtained after connected-component labeling. The spatial properties used are the top-most coordinate T(l), left-most coordinate L(l), bottom-most coordinate B(l), right-most coordinate R(l), area A(l), width W(l), height H(l), aspect ratio AR(l), size S(l), and density D(l). Among these, the first five properties can be obtained during connected-component labeling; the others are derived from them. The spatial properties are then used to filter out falsely detected objects by thresholding. The method utilizes two statistical moments, the mean and the variance of the vehicles' spatial properties, as the references for threshold assignment. The


thresholding operator shown in Eq. (14) is used to filter out falsely detected objects by the width of the bounding box. WM(l, t) and WV(l, t) are the mean and the variance of the widths of all moving objects found so far. For the other spatial properties, the operators utilized are similar.

Eliminate the l-th component if W(l) ≤ WM(l, t) - 2 · WV(l, t); otherwise, do nothing.   (14)
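A sketch of the width filter; the running statistics use Welford's update, which is our choice, and the inequality follows our reconstruction of Eq. (14), so both should be treated as assumptions.

```python
class RunningStat:
    """Running mean and variance of one spatial property (e.g., width)."""

    def __init__(self):
        self.n = 0
        self.mean = 0.0
        self.m2 = 0.0

    def add(self, v):
        # Welford's one-pass update of mean and sum of squared deviations.
        self.n += 1
        d = v - self.mean
        self.mean += d / self.n
        self.m2 += d * (v - self.mean)

    @property
    def var(self):
        return self.m2 / self.n if self.n else 0.0

def keep_object(width, stat):
    """Eq. (14) as reconstructed: drop a component whose width falls
    2 * WV below the running mean WM."""
    return width > stat.mean - 2 * stat.var
```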

E. Prior Splitting by Lane Information

If vehicles are occluded when they just enter the frame, the tracking processing might be confused. Fortunately, most vehicles are occluded side by side, horizontally across adjacent lanes. Consequently, prior occlusion detection and resolution with the help of lane information is proposed. First, a lane mask H(x, y) with the values -1 (ignored), 0 (separation), 1 (first lane), and so on is made, as shown in Fig. 2(b). Each moving object is assigned a label ID l in a label-ID image g(x, y) after connected-component labeling. Next, each moving object is checked by Eq. (15) to obtain a histogram S(l, h) with respect to the lane ID h. Then, occlusion is judged by Eq. (16) with a reasonable threshold TH3 = 5. Finally, the occlusion is resolved by splitting the moving object with reference to H(x, y).

If g(x, y) = l and H(x, y) ≠ -1 and H(x, y) ≠ 0, then S(l, H(x, y)) = S(l, H(x, y)) + 1   (15)

If |S(l, h) - S(l, h + 1)| < TH3, then do occlusion resolution   (16)
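A sketch of the lane-histogram test of Eqs. (15)-(16); requiring both adjacent lanes to be occupied is our added guard on top of the reconstructed inequality.

```python
def lane_histogram(label_img, lane_mask, l):
    """Eq. (15): count the pixels of object l falling in each lane ID,
    skipping the ignored (-1) and separation (0) mask values."""
    s = {}
    for g_row, h_row in zip(label_img, lane_mask):
        for g, h in zip(g_row, h_row):
            if g == l and h not in (-1, 0):
                s[h] = s.get(h, 0) + 1
    return s

def occluded(s, th3=5):
    """Eq. (16) as reconstructed: the object occupies two adjacent lanes with
    nearly equal pixel counts (both lanes nonempty is our extra guard)."""
    return any(h + 1 in s and abs(s[h] - s[h + 1]) < th3 for h in s)
```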

(a) (b)
Fig. 2. (a) A background obtained after color background extraction, with a detection zone bounded by a magenta rectangle; (b) visual representation of the lane information based on the background shown in (a). Different gray-level regions indicate different lanes; the white regions are separations between lanes; the black regions are ignored regions.

F. Update Trajectories and Eliminate Vibrated Moving Objects

To reduce the computation power, the centers of trajectories are used to relate the current moving objects to the currently existing trajectories. The relation used to check whether a moving object should be related to an existing trajectory, or a new trajectory should be created, is the distance between the centers of the current moving object and of the last trajectory node (the trajectory node of the k-th trajectory at time instance t is denoted as l(k, t)). If the number of nodes in the existing trajectory is greater than 1, the angle AC(l, k, t) between a vector A(l, k, t), from the center C(l(k, t-1)) of the last node to the center C(l) of the moving object, and another vector B(l, k, t), from the center C(l(k, t-2)) of the second-to-last node to the center of the last node, is checked too. If AC(l, k, t) > 0, the l-th moving object satisfies the angle constraint (60°) with the k-th trajectory at time instance t.

A(l, k, t) = C(l) - C(l(k, t-1)),
B(l, k, t) = C(l(k, t-1)) - C(l(k, t-2)),
AC(l, k, t) = A(l, k, t) · B(l, k, t) - |A(l, k, t)| |B(l, k, t)| cos 60°   (17)

If a moving object satisfies the distance constraint but does not satisfy the angle constraint, a vibration counter associated with the trajectory is increased by 1. If the vibration counter of the trajectory is greater than 3, the trajectory is regarded as a vibrating moving object; that is, the trajectory is ignored.

G. Resolve Multiple-Vehicle Occlusions

In case the prior occlusion detection and resolution stated in Subsection III.E fails, a post occlusion detection and resolution technique based on the trajectories is used. The following steps check whether a moving object is formed by occluded vehicles:

1. If a moving object cannot be related to any existing trajectory, go to step 2. Otherwise, add the moving object to the trajectory.

2. If the region of the moving object, plus an offset estimated from the centers of the last two trajectory nodes, is a superset of the trajectory's last node, go to step 3. Otherwise, create a new trajectory for the moving object.

3. Split the moving object into two moving objects.
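Steps 2 and 3 can be sketched with axis-aligned bounding boxes (left, top, right, bottom); shifting the last node by the inter-node motion before the superset test is our interpretation of the offset.

```python
def contains(outer, inner):
    """True when box `outer` is a superset of box `inner`."""
    return (outer[0] <= inner[0] and outer[1] <= inner[1]
            and outer[2] >= inner[2] and outer[3] >= inner[3])

def shift(box, dx, dy):
    """Translate a box by (dx, dy)."""
    return (box[0] + dx, box[1] + dy, box[2] + dx, box[3] + dy)

def classify_unmatched(obj_box, last_node_box, motion):
    """Steps 2-3: an unmatched moving object whose region covers the
    motion-shifted last trajectory node is treated as an occlusion and split;
    otherwise it starts a new trajectory."""
    dx, dy = motion  # offset from the centers of the last two trajectory nodes
    predicted = shift(last_node_box, dx, dy)
    return "split" if contains(obj_box, predicted) else "new_trajectory"
```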

H. Calculate Traffic Parameters

In general, most traffic parameters can be derived from the tracking trajectories. However, each trajectory has to be classified to a lane ID before we calculate the traffic parameters. The method used to classify a trajectory to a lane ID is specified in Eq. (18). In Eq. (18), S(l, h) is obtained in the same way as in Eq. (15), where l is extended to l(k, t-1) to indicate the k-th trajectory at time instance (t-1).

h*(k, t) = argmax_h S(l(k, t-1), h)   (18)

The equations used to calculate the traffic parameters are listed in Table 1. Note that the moment we calculate the traffic parameters is the moment we delete the trajectory. Hence, the last node of the just-deleted trajectory is at time instance t-1.

Table 1. Equations used to calculate traffic parameters.

Speed: VS(h*)
    VS(h*(k, t)) = (7/8) · VS(h*(k, t)) + (1/8) · |C(l(k, t-1)) - C(l(k, 0))| · FPH · 0.005 / ((N_t(k, t-1) - 1) · W̄(k)),
    where FPH is the number of frames per hour; N_t(k, t-1) is the number of nodes in the k-th trajectory at time instance t-1; C(l) is the center of the l-th moving object; W̄(k) is the average width of the nodes in the k-th trajectory.

Quantity: VQ(h*)
    If N_t(k, t-1) > 3, then VQ(h*(k, t)) = VQ(h*(k, t)) + 1.

Headway: VH(h*)
    First, initialize t_H(h) = 0 for every lane ID h. Then
    VH(h*(k, t)) = (t - t_H(h*(k, t))) · VS(h*(k, t)) / FPH, and afterwards t_H(h*(k, t)) = t.

Volume: VV(h*)
    VV(h*(k, t)) = VQ(h*(k, t)) · FPH / t.

Occupancy: VO
    If N_t(k, t-1) > 3, then OF = OF + 1; VO = OF / t × 100%.

I. Parameter Automation

To adapt the MVDT system to different capture-view conditions, all parameters used in the system have to be decided automatically. In this work, the mean of the small-vehicle widths or heights is used to tune the system parameters. Before system start-up, the car height and width statistics are gathered, and the statistical results are stored. The mean of the vehicle widths or heights is taken as the separation between small vehicles and large vehicles. Then, the small-vehicle width or height mean is obtained by averaging the small-vehicle widths or heights (from 0 to the separation). When we update the small-vehicle width mean, all vehicles with widths less than the mean width are taken into the average.
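A sketch of one automation pass: the overall width mean serves as the small/large separation, and the small-vehicle mean is the average of the widths below it (the fallback when no width qualifies is our choice).

```python
def small_vehicle_width_mean(widths, separation):
    """Average of the widths below the small/large separation; returns the
    separation itself when no width qualifies (assumed fallback)."""
    small = [w for w in widths if w < separation]
    return sum(small) / len(small) if small else separation

def update_separation(widths):
    """One automation pass, as described above: take the overall mean as the
    separation, then average the widths that fall below it."""
    overall = sum(widths) / len(widths)
    return small_vehicle_width_mean(widths, overall)
```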

IV. EXPERIMENTAL RESULTS

In the experimental results, two types of image sequences (as shown in Fig. 3 and Fig. 4), captured at the 99 km mark of National Highway No. 1, are tested. Each sequence contains 10500 frames. The size of each image is 320×240, and the frame rate of the sequences is 30 fps. The average processing time is 148 ms per frame. The proposed system is developed on the Windows XP platform with a Pentium 4 2.8 GHz CPU and 512 MB of RAM.

Fig. 3 and Fig. 4 are two examples of occlusion resolution. In Fig. 3, the two occluded vehicles on the right side are split by the lane mask. In Fig. 4, the two occluded vehicles are split based on trajectories. Fig. 5(a)-(c) shows the traffic parameters recorded at three different time instances. In addition, the accuracy rates of the traffic parameters are listed in Table 2. Precisely, the parameters listed are the total vehicle quantity, the total small-vehicle quantity, and the total large-vehicle quantity over all lanes at time instances 1000, 2000, and 3000.

Fig. 3. An example of prior occlusion resolution

Fig. 4. An example of post occlusion resolution

(a) the 1000-th frame: VS(1) = 98.4 km/hr; VQ(1) = 12; VQS(1) = 12; VQL(1) = 0; VH(1) = 74 m; VV(1) = 1383/hr; VO = 85.0%

(b) the 2000-th frame: VS(1) = 98.2 km/hr; VQ(1) = 34; VQS(1) = 29; VQL(1) = 5; VH(1) = 48 m; VV(1) = 1896/hr; VO = 90.2%

(c) the 3000-th frame: VS(1) = 105.0 km/hr; VQ(1) = 49; VQS(1) = 41; VQL(1) = 8; VH(1) = 54 m; VV(1) = 1802/hr; VO = 89.9%

Fig. 5. Traffic parameters of the first lane (the left-most lane): (a) the 1000-th frame; (b) the 2000-th frame; (c) the 3000-th frame.

Table 2. The accuracy rates of the quantities of total, small, and large vehicles.

Traffic Parameter                    Accuracy Rate
Total quantity                       96.6%
Total quantity of small vehicles     98.2%
Total quantity of large vehicles     95.0%

V. CONCLUSIONS

In this work, an MVDT system with parameter automation, vehicle detection, prior splitting by lane information, vehicle tracking, post splitting, and comprehensive traffic parameter calculation is proposed. In the beginning, a spatio-temporal-statistics-based color background extraction technique with luminance adaptation and wrong-convergence compensation is utilized to segment moving objects robustly. Next, prior splitting by lane information is exploited to resolve occluded vehicles when the vehicles have just entered the detection zone. However, some vehicles might become occluded due to lane changes in the middle of the detection zone. Hence, after tracking vehicles by a distance-based, or distance-and-angle-based, relation, a post splitting technique is applied. Finally, traffic parameters based on the tracking trajectories are calculated to benefit traffic monitoring. The experimental results show that the processing speed of the proposed system achieves real-time operation with high accuracy. In addition, the proposed system can be set up without any environment information given in advance, except the lane mask. For this reason, a method that automatically detects lanes based on the extracted background will be studied in the future.

ACKNOWLEDGMENT

This work was supported by the National Science Council, Taiwan, under Grant no. NSC94-2213-E-009-062.

REFERENCES

[1] Shunsuke Kamijo, Yasuyuki Matsushita, Katsushi Ikeuchi, and Masao Sakauchi, "Traffic monitoring and accident detection at intersections", IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 108-118, Jun. 2000.

[2] Andrew H. S. Lai and Nelson H. C. Yung, "Vehicle-type identification through automated virtual loop assignment and block-based direction-biased motion estimation", IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 86-97, Jun. 2000.

[3] Dae-Woon Lim, Sung-Hoon Choi, and Joon-Suk Jun, "Automated detection of all kinds of violations at a street intersection using real time individual vehicle tracking", IEEE International Conference on Image Analysis and Interpretation, pp. 126-129, Apr. 2002.

[4] Rita Cucchiara, Massimo Piccardi, and Paola Mello, "Image analysis and rule-based reasoning for a traffic monitoring system", IEEE Transactions on Intelligent Transportation Systems, vol. 1, no. 2, pp. 119-130, Jun. 2000.

[5] Baoxin Li and Rama Chellappa, "A generic approach to simultaneous tracking and verification in video", IEEE Transactions on Image Processing, vol. 11, no. 5, pp. 530-544, May 2002.

[6] Hai Tao, Harpreet S. Sawhney, and Rakesh Kumar, "Object tracking with Bayesian estimation of dynamic layer representations", IEEE Transactions on Pattern Analysis and Machine Intelligence, vol. 24, no. 1, pp. 75-89, Jan. 2002.

[7] Christopher E. Smith, Scott A. Brandt, and Nikolaos P. Papanikolopoulos, "Visual tracking for intelligent vehicle-highway systems", IEEE Transactions on Vehicular Technology, vol. 45, no. 4, pp. 744-759, Nov. 1996.

[8] Surendra Gupte, Osama Masoud, Robert F. K. Martin, and Nikolaos P. Papanikolopoulos, "Detection and classification of vehicles", IEEE Transactions on Intelligent Transportation Systems, vol. 3, no. 1, pp. 37-47, Mar. 2002.

[9] Dieter Koller, Joseph Weber, and Jitendra Malik, "Robust multiple car tracking with occlusion reasoning", Third European Conference on Computer Vision, Springer-Verlag, pp. 186-196, 1997.

[10] Thomas Bücher, Cristobal Curio, Johann Edelbrunner, Christian Igel, David Kastrup, Iris Leefken, Gesa Lorenz, Axel Steinhage, and Werner von Seelen, "Image processing and behavior planning for intelligent vehicles", IEEE Transactions on Industrial Electronics, vol. 50, no. 1, pp. 62-75, Feb. 2003.
