
Research Report Agreement T2695 Task 61 Single Loop Video Data

IMPROVING TRUCK AND SPEED DATA USING PAIRED VIDEO AND SINGLE-LOOP SENSORS

by

Yinhai Wang, Assistant Professor
Nancy L. Nihan, Professor/Director, Transportation Northwest
Ryan P. Avery, Graduate Research Assistant
Guohui Zhang, Graduate Research Assistant

Department of Civil and Environmental Engineering University of Washington

Seattle, Washington 98195-2700

Washington State Transportation Center (TRAC) University of Washington, Box 354802

1107 NE 45th Street, Suite 535 Seattle, Washington 98105-4631

Washington State Department of Transportation Technical Monitor

Ted Trepanier, State Traffic Engineer

Sponsored by

Washington State Transportation Commission Washington State Department of Transportation

Olympia, Washington 98504-7370

Transportation Northwest (TransNow) University of Washington

135 More Hall, Box 352700 Seattle, Washington 98195-2700

and in cooperation with U.S. Department of Transportation

Federal Highway Administration

December 2006


TECHNICAL REPORT STANDARD TITLE PAGE

1. REPORT NO.: WA-RD 656.1
2. GOVERNMENT ACCESSION NO.
3. RECIPIENT'S CATALOG NO.
4. TITLE AND SUBTITLE: Improving Truck and Speed Data Using Paired Video and Single-Loop Sensors
5. REPORT DATE: December 2006
6. PERFORMING ORGANIZATION CODE
7. AUTHOR(S): Yinhai Wang, Nancy Nihan, Ryan Avery, and Guohui Zhang
8. PERFORMING ORGANIZATION REPORT NO.: TNW
9. PERFORMING ORGANIZATION NAME AND ADDRESS: Washington State Transportation Center (TRAC), University of Washington, Box 354802, University District Building, 1107 NE 45th Street, Suite 535, Seattle, Washington 98105-4631
10. WORK UNIT NO.
11. CONTRACT OR GRANT NO.: Agreement T2695 Task 61
12. SPONSORING AGENCY NAME AND ADDRESS: Research Office, Washington State Department of Transportation, Transportation Building, MS 47372, Olympia, Washington 98504-7372. Doug Brodin, Project Manager, 360-705-7972
13. TYPE OF REPORT AND PERIOD COVERED: Final Research Report
14. SPONSORING AGENCY CODE
15. SUPPLEMENTARY NOTES: This study was conducted in cooperation with the University of Washington and the U.S. Department of Transportation.
16. ABSTRACT: Real-time speed and truck data are important inputs for modern freeway traffic control and management systems. However, these data are not directly measurable by single-loop detectors. Although dual-loop detectors provide speeds and classified vehicle volumes, there are too few of them on our current freeway systems to meet the practical needs of advanced traffic management systems. This makes it extremely desirable to develop appropriate algorithms to calculate speed and truck volume from single-loop outputs or from video data. To obtain quality estimates of traffic speed and truck volume data, several algorithms were developed and implemented in this study: (1) a speed estimation algorithm based on the region growing mechanism and single-loop measurements; (2) a set of computer-vision-based algorithms for extracting background images from a video sequence, detecting the presence of vehicles, identifying and removing shadows, and calculating pixel-based vehicle lengths for classification; and (3) a speed estimation algorithm that uses paired video and single-loop sensor inputs. These algorithms were implemented in three distinct computer applications. Field-collected video and loop detector data were used to test the algorithms. Our test results indicated that quality speed and truck volume data can be estimated with the proposed algorithms by using single-loop data, video data, or both. The Video-based Vehicle Detection and Classification (VVDC) system, based on the proposed video image processing algorithms, provides a cost-effective solution for automatic traffic data collection with surveillance video cameras. For locations with both video and single-loop sensors, speed estimates can be improved by combining video data with single-loop data.
17. KEY WORDS: Trucks, Data Collection, Computer Vision, Loop Detectors, Vehicle Classification, Video Image Processing, Speed
18. DISTRIBUTION STATEMENT
19. SECURITY CLASSIF. (OF THIS REPORT): None
20. SECURITY CLASSIF. (OF THIS PAGE): None
21. NO. OF PAGES
22. PRICE


DISCLAIMER

The contents of this report reflect the views of the authors, who are responsible for the facts and accuracy of the data presented herein. This document is disseminated through the Transportation Northwest (TransNow) Regional Center under the sponsorship of the U.S. Department of Transportation UTC Grant Program and through the Washington State Department of Transportation. The U.S. Government assumes no liability for the contents or use thereof. Sponsorship for the local match portion of this research project was provided by the Washington State Department of Transportation. The contents do not necessarily reflect the views or policies of the U.S. Department of Transportation or Washington State Department of Transportation. This report does not constitute a standard, specification, or regulation.


TABLE OF CONTENTS

Executive Summary

Part I Research Background
1.0 INTRODUCTION
  1.1 RESEARCH BACKGROUND
  1.2 PROBLEM STATEMENT
  1.3 RESEARCH OBJECTIVE
2.0 STATE OF THE ART
  2.1 ESTIMATING SPEED AND TRUCK VOLUMES USING SINGLE-LOOP MEASUREMENTS
  2.2 VEHICLE DETECTION AND CLASSIFICATION USING VIDEO IMAGE PROCESSING

Part II Speed and Bin-Volume Estimates Using Single-Loop Outputs
3.0 SINGLE LOOP ALGORITHM DESIGN
  3.1 PROPERTIES OF VEHICLE LENGTH DISTRIBUTION
  3.2 ALGORITHM DESIGN
  3.3 ALGORITHM IMPLEMENTATION
4.0 SINGLE LOOP ALGORITHM TESTS
  4.1 TEST SITES
  4.2 TEST RESULTS AND DISCUSSION
  4.3 SINGLE-LOOP ALGORITHM TEST SUMMARY

Part III Video Image Processing for Vehicle Detection and Classification
5.0 VIDEO RESEARCH APPROACH
  5.1 BACKGROUND EXTRACTION
  5.2 VEHICLE DETECTION
  5.3 SHADOW REMOVAL
  5.4 LENGTH-BASED CLASSIFICATION
6.0 DEVELOPMENT OF THE VIDEO-BASED VEHICLE DETECTION AND CLASSIFICATION SYSTEM
  6.1 SYSTEM ARCHITECTURE
  6.2 LIVE VIDEO CAPTURE MODULE
  6.3 USER INPUT MODULE
  6.4 BACKGROUND EXTRACTION MODULE
  6.5 VEHICLE DETECTION MODULE
  6.6 SHADOW REMOVAL MODULE
  6.7 VEHICLE CLASSIFICATION MODULE
7.0 VVDC SYSTEM TESTS AND DISCUSSION
  7.1 TEST CONDITIONS AND DATA
  7.2 OFFLINE TESTS
    7.2.1 The I-5 Test Location
    7.2.2 The SR 99 Test Location
  7.3 ONLINE TEST
  7.4 VVDC SYSTEM TEST SUMMARY

Part IV Paired Video and Single-Loop Sensors
8.0 PAIRED VIDEO AND SINGLE-LOOP SENSOR ALGORITHM
  8.1 INTRODUCTION
  8.2 ALGORITHM DESIGN
9.0 SYSTEM DEVELOPMENT FOR PAIRED VIDEO AND SINGLE-LOOP SENSORS
  9.1 SYSTEM DESIGN
  9.2 SYSTEM IMPLEMENTATION
10.0 TEST OF THE PAIRED VIDEO AND SINGLE-LOOP SYSTEM
  10.1 TEST SITES AND DATA
  10.2 TEST RESULTS AND DISCUSSION
  10.3 TEST SUMMARY FOR THE PAIRED VL SYSTEM
11.0 CONCLUSIONS AND RECOMMENDATIONS
  11.1 CONCLUSIONS
  11.2 RECOMMENDATIONS

Acknowledgments
References


LIST OF FIGURES

Figure 3-1: Length Distribution of Vehicles on Southbound I-5
Figure 3-2: SV and LV Length Distributions with Normal Distribution Curves
Figure 3-3: Congestion Occupancy Threshold
Figure 3-4: Single-Loop Region Growing Algorithm Flowchart
Figure 3-5: Interval Groups after Region Growing
Figure 3-6: User Interface of the ST-Estimator System
Figure 3-7: Real-Time Data Window of the ST-Estimator System
Figure 3-8: ST-Estimator's Program Settings Interface
Figure 4-1: Estimated vs. Actual Speeds for Region Growing and WSDOT Algorithms with Period Lengths of 3 and 5 Minutes on Lane 2 of SB I-5 at NE 145th St, May 17, 2005
Figure 5-1: An Example Video Scene and Its Background
Figure 5-2: The Components of the Virtual Detector
Figure 5-3: Otsu Method for Shadow Removal on a Bright Vehicle and a Dark Vehicle
Figure 5-4: Otsu Method for Shadow Removal with a Non-Uniform Cast Shadow
Figure 5-5: A Successful Example of the Region Growing Shadow Removal Method
Figure 5-6: An Unsuccessful Example of the Region Growing Shadow Removal Method
Figure 5-7: Sample of Edge Imaging (Assuming the Bounding Box Includes the Entire Image)
Figure 5-8: An Example of a Detected Truck Before and After Shadow Removal
Figure 6-1: Components of the VVDC System
Figure 6-2: Flow Chart of the VVDC System
Figure 6-3: The Main User Interface of the VVDC System
Figure 6-4: The Interactive Configuration Interface
Figure 6-5: Background Extraction Process
Figure 6-6: Extracted Background Image
Figure 6-7: A Snapshot of the VVDC System When a Vehicle is Detected and Classified
Figure 6-8: Detection of Moving Blobs through Background Subtraction
Figure 6-9: A Step-by-Step Illustration of the Shadow Removal Process
Figure 7-1: Southbound I-5 Near the NE 145th Street Over-crossing
Figure 7-2: Northbound SR 99 Near the NE 41st Street Over-crossing
Figure 7-3: Live Video Display at the STAR Lab
Figure 7-4: Southbound I-5 Near the NE 92nd Street Over-crossing
Figure 7-5: A Truck Triggered Both Lane 1 and Lane 2 Detectors
Figure 7-6: A Lane-Changing Vehicle Missed by the VVDC System
Figure 7-7: A Misclassified Truck with a Bed Color Similar to the Background Color
Figure 7-8: One Vehicle Driving on the Shoulder Did Not Trigger the Detector
Figure 7-9: A Lane-Changing Car Was Missed
Figure 7-10: A Gas Tank Was Misclassified Because of the Large Distance between the Two Containers
Figure 7-11: Truck Over-Count Due to Longitudinal Occlusion
Figure 8-1: Flow Chart of the Paired VL Sensor System
Figure 8-2: Schematic of the WSDOT Video Signal Communication System
Figure 9-1: Flow Chart for the Paired VL System
Figure 9-2: The Time Synchronization Module for Video and Loop Subsystems
Figure 9-3: A Snapshot of the Speed Estimation in a 20-Second Interval
Figure 10-1: A Snapshot of the Test Site for the Paired VL System
Figure 10-2: Comparison between the Observed Speeds and Estimated Speeds at Test Site I
Figure 10-3: Comparison between the Observed Speeds and Estimated Speeds at Test Site II


LIST OF TABLES

Table 1-1: WSDOT Dual-loop Length Classification
Table 3-1: Vehicle Length Distribution Statistics
Table 3-2: Vehicle Length Distribution Statistics by Lane Occupancy Level
Table 4-1: Site Information and Interval Vehicle Volume Statistics
Table 4-2: Summary of Speed Estimation Results
Table 4-3: Summary of LV Volume Estimation
Table 7-1: Offline Test Results from the I-5 Test Location
Table 7-2: Error Cause Investigation for the Offline Test at the I-5 Test Location
Table 7-3: Offline Test Results from the SR 99 Test Location
Table 7-4: Error Cause Investigation for the Offline Test on SR 99
Table 7-5: Results of the Online Test on Southbound I-5 Near the NE 92nd Street Over-crossing
Table 7-6: Error Cause Investigation for the Online Test on Southbound I-5 Near the NE 92nd Street Over-crossing
Table 8-1: Processing Delay for Each Input Position
Table 10-1: Online Test Results from the Two Test Locations


EXECUTIVE SUMMARY

Traffic speed and truck volume data are important variables for transportation planning, pavement design, traffic safety, traffic operations, and car emission controls. However, these data are not directly measured by single-loop detectors, which are the most widely available type of sensor on roadway networks in the U.S. To obtain quality estimates of traffic speed and truck volume data from single-loop detectors and from video detectors, several algorithms were developed and tested in this study.

First, a new speed estimation algorithm that uses single-loop data was developed. This algorithm applies the region growing mechanism commonly used in video image processing. This region growing algorithm, together with a vehicle classification algorithm based on the Nearest Neighbor Decision (NND) rule, was implemented in the single-loop Speed and Truck volume Estimator (ST-Estimator) for improved speed and truck volume data. Test results indicated that the new speed algorithm achieved much better accuracy than the traditional algorithm used by most traffic management centers. By using the speed estimated with the new algorithm, long vehicle (LV) volumes can be estimated for vehicle classification purposes on the basis of the NND rule. LV volume errors at three test locations in Seattle (the second lanes at stations ES-167D, ES-172R, and ES-209D) were within 7.5 percent over a 24-hour period. These results indicate that the ST-Estimator can be employed to obtain reasonably accurate speed and LV volume estimates at single-loop stations.

Second, several computer-vision-based algorithms were developed or applied to extract the background image from a video sequence, detect the presence of vehicles, identify and remove shadows, and calculate pixel-based vehicle lengths for classification. These algorithms were implemented in the prototype Video-based Vehicle Detection and Classification (VVDC) system by using Microsoft Visual C#. As a plug-and-play system, the VVDC system is capable of processing live video signals in real time. It can also process digitized video images in the JPEG or BMP formats. Because the VVDC system does not require camera calibration, it can be easily applied to locations with surveillance video cameras. Users can also specify the bin threshold to collect desired types of vehicles. The VVDC system was tested at three locations under different traffic and environmental conditions. The accuracy for vehicle detection was above 97 percent, and the total truck count error was lower than 9 percent for all three tests. This suggests that the video image processing method developed for vehicle detection and classification in this study is indeed a viable alternative for truck data collection. However, the prototype VVDC system is currently designed to work in the daytime and under conditions without longitudinal vehicle occlusion or severe camera vibration.

Third, a speed estimation algorithm that uses paired video and single-loop sensor inputs was designed. The core idea of this algorithm is to use a video sensor to screen out intervals containing LVs before single-loop measurements are applied to the traditional speed estimation algorithm. Because the presence of LVs violates the uniform vehicle length assumption behind the traditional algorithm, intervals containing LVs must be properly addressed to avoid speed estimation bias. The paired video and single-loop sensors rely on video image processing for LV detection and on single-loop data for speed calculation. If an interval is identified as containing one or more LVs, its single-loop measurements are dropped from the speed estimation; instead, the most recently calculated interval speed is assigned to that interval. The paired video and single-loop algorithm was implemented in the Paired VL system. Evaluation showed that speeds estimated by this system were more accurate than speeds estimated with the traditional algorithm. However, finding a location with both video and single-loop sensors may not be easy, time synchronization for the Paired VL system is very challenging, and detection errors from the video sensor may significantly degrade the system's performance. These factors cast doubt on the broad applicability of the Paired VL system, although the effectiveness of the idea was demonstrated in this study.
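
The interval-screening logic just described is simple enough to sketch in a few lines. The Python fragment below is an illustration of that logic only, not code from the Paired VL system; the record field `contains_lv` and the `estimate_speed` helper (standing in for the traditional constant-g single-loop calculation) are hypothetical names.

```python
def paired_vl_speeds(intervals, estimate_speed):
    """Illustrative sketch of the Paired VL interval screening.

    `intervals` is assumed to yield records carrying single-loop
    measurements plus a video-derived boolean `contains_lv`;
    `estimate_speed` stands in for the traditional constant-g
    single-loop speed calculation. Both names are hypothetical.
    """
    speeds = []
    last_speed = None
    for interval in intervals:
        if interval.contains_lv and last_speed is not None:
            # An LV was seen on video: drop this interval's loop
            # measurements and carry forward the most recently
            # calculated interval speed.
            speeds.append(last_speed)
        else:
            last_speed = estimate_speed(interval)
            speeds.append(last_speed)
    return speeds
```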

In summary, several algorithms and corresponding computer tools for improved speed and truck data were developed during this study. The authors conclude that quality speed and truck volume data can be estimated from single-loop data by applying the ST-Estimator. Although the prototype VVDC system works only under relatively ideal conditions, its utility and effectiveness were demonstrated in this study. Given that surveillance video cameras have been increasingly deployed in recent years, the VVDC system can be a cost-effective way to turn these cameras into video detectors when necessary. For locations with both video and single-loop sensors, speed estimates can be improved by combining video data with single-loop data.


PART I RESEARCH BACKGROUND

1.0 INTRODUCTION

1.1 RESEARCH BACKGROUND

Traffic speed is one of the most important variables for traffic operations and control. It is both a potential sign of problems on the roadway and a good measure of system effectiveness. Many incident detection algorithms are based on traffic speed data, and speed variation is also a good indicator of traffic safety (Anderson and Krammes, 2000). If good network-wide speed information is available, the travel time for any origin-destination pair can be calculated.

Data concerning trucks and heavy vehicles are important for several reasons. Because of their heavy weight and large turning radii, long vehicles (LVs) have very different moving characteristics than short vehicles (SVs), which are mostly passenger cars. This affects a roadway's geometric design factors, such as horizontal alignment and curb heights. The heavy weight of such vehicles is also an important factor in pavement design and maintenance, as truck volumes influence both pavement life and design parameters (AASHTO, 2004). Roadway performance is influenced by the presence of large and/or low-performance vehicles in the traffic stream because they reduce roadway capacity (Cunagin and Messer, 1983). The Highway Capacity Manual (TRB, 2000) explicitly stipulates that passenger-car equivalents of LVs under different conditions should be used for highway design. Safety is also influenced by LVs: a recent study found that 8 percent of fatal vehicle-to-vehicle crashes involved large trucks, although only 3 percent of all registered vehicles were large trucks (NHTSA, 2004). Recent studies (Peters et al., 2004; Kim et al., 2004) also found that particulate matter (PM) is strongly associated with the onset of myocardial infarction and respiratory symptoms. Heavy-duty trucks that use diesel engines are major sources of PM, accounting for 72 percent of traffic-emitted PM (EPA, 2001).

All these facts illustrate that good speed and truck volume data are extremely important for accurate analysis of traffic safety, traffic pollution, and flow characteristics in transportation planning, management, and engineering. They are also important inputs for advanced traffic management systems (ATMS) and advanced traveler information systems (ATIS). Additionally, truck volume data are needed by federal and state transportation organizations to adequately monitor and analyze our nation's freight movements.

The Washington State Department of Transportation's (WSDOT's) dual-loop detection system classifies vehicles into four bins according to their lengths. The four length categories are described in Table 1-1. Because of variations in the lengths of vehicles within specific FHWA vehicle classes, the four WSDOT length classes do not directly relate to the 13 FHWA vehicle classes (Hallenbeck, 1993). Typically, vehicles 40 ft and longer are referred to as LVs (Wang and Nihan, 2003; Kwon et al., 2003), and those shorter than 40 ft are referred to as short vehicles (SVs). The majority of LVs on Seattle area freeways are trucks; hence, LVs and trucks are used interchangeably in this report.


Table 1-1: WSDOT Dual-loop Length Classification

Class   Length Range (feet)   Vehicle Types
Bin 1   Less than 26          Cars, pickups, and short single-unit trucks
Bin 2   26 to 39              Cars and trucks pulling trailers, long single-unit trucks
Bin 3   40 to 65              Combination trucks
Bin 4   Longer than 65        Multi-trailer trucks
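
Table 1-1 amounts to a simple threshold lookup on estimated vehicle length. The following is a minimal Python sketch (illustrative only, not from the report) encoding the bin boundaries from the table and the 40-ft LV convention noted above.

```python
def wsdot_length_bin(length_ft: float) -> int:
    """Map a vehicle length in feet to a WSDOT length bin (Table 1-1)."""
    if length_ft < 26:
        return 1  # cars, pickups, and short single-unit trucks
    elif length_ft < 40:
        return 2  # cars and trucks pulling trailers, long single-unit trucks
    elif length_ft <= 65:
        return 3  # combination trucks
    else:
        return 4  # multi-trailer trucks

def is_long_vehicle(length_ft: float) -> bool:
    """Vehicles 40 ft and longer (Bins 3 and 4) are treated as LVs."""
    return length_ft >= 40
```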

1.2 PROBLEM STATEMENT

Since its introduction in the early 1960s, the inductance loop detector has become the most popular form of vehicle detection system (ITE, 1997). Many freeway corridors contain single-loop detectors for collecting volume (the number of vehicles passing per unit time) and lane occupancy (the fraction of some total time interval that a loop is occupied by vehicles). These data are valuable sources for transportation planning and traffic operations. However, recent developments in ATMS require increasingly accurate and timely speed and vehicle-classification data, which are not directly measurable by single-loop detectors. To obtain such data, dual-loop detectors are typically employed.

A dual-loop detector, also called a speed trap or double-loop detector, is formed by two consecutive single-loop detectors separated by several meters. Because a dual-loop detector records the time a vehicle takes to traverse from the first loop to the second, and the distance between the two loops is predetermined, it can calculate the speed of a vehicle fairly accurately. By applying the calculated speed to the single-loop measured lane occupancies, the length of a vehicle can also be estimated, and the vehicle can be assigned to a class on the basis of its length. However, although dual-loop detectors are ideal for collecting speed and vehicle-classification data, there are too few of them on our current freeway systems to meet practical ATMS and ATIS needs, and the cost of upgrading a single-loop detector to a dual-loop detector is high. According to WSDOT's experience, the cost of upgrading from a single-loop detector to a dual-loop detector ranges from $3250 to $5750, comprising a $750 direct cost for loop placement and a $2500 to $5000 indirect cost caused by lane closure (Wang and Nihan, 2003). In addition, most dual-loop detectors deployed in the greater Seattle area are reported to have serious under-count or over-count problems for bin volumes (Zhang et al., 2003). Therefore, making existing single-loop detectors capable of providing better speed and vehicle-classification data is of practical significance for traffic researchers.
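
To make the dual-loop computation concrete, the sketch below shows the two calculations described above: speed from the traverse time between the paired loops, and vehicle length from that speed and the loop on-time. This is a minimal illustration, not WSDOT's implementation; in particular, subtracting a nominal loop length to go from effective length to vehicle length is an assumption of the standard formulation, and the 6-ft default is purely a placeholder.

```python
FT_PER_MILE = 5280.0

def dual_loop_speed_mph(loop_spacing_ft: float, traverse_time_s: float) -> float:
    """Speed from the time a vehicle takes to travel between the two loops."""
    return loop_spacing_ft / traverse_time_s * 3600.0 / FT_PER_MILE

def vehicle_length_ft(speed_mph: float, on_time_s: float,
                      loop_length_ft: float = 6.0) -> float:
    """Length from speed and the loop on-time.

    speed * on-time gives the *effective* length (vehicle plus detection
    zone); the loop-length correction and its 6-ft default are assumed
    here, not taken from the report.
    """
    effective_length_ft = speed_mph * FT_PER_MILE / 3600.0 * on_time_s
    return effective_length_ft - loop_length_ft
```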

To meet ATMS and ATIS needs, new sensors capable of collecting speed and truck volume data have been developed in recent years. Among these, video image processors (VIPs) are noteworthy. These systems offer the advantage of preserving a continuous stream of information rather than recording discrete vehicle passages, as most other detection systems do. Examples include the Vantage Express system developed by Iteris, Inc. and the VideoTrack system developed by Peek Traffic Inc. These systems can operate during both daytime and nighttime conditions, and some claim to be capable of detecting vehicles in unfavorable weather. However, the cost of such systems is significant, and they require calibrated camera images to work correctly. Calibrating these systems normally requires very specific road surface information (such as the distance between recognizable road surface marks) and/or camera information (such as the elevation and tilt angle), which may not be easy to obtain (Avery et al., 2004). Furthermore, recent studies (Bonneson and Abbas, 2002; Martin et al., 2004; Rhodes et al., 2005) that evaluated some of these commercial systems found that shadows and headlight reflections generated significant false detections (a false detection occurs when a "no" event is recorded as a "yes") and early detections. These commercial systems typically require concurrent installation of proprietary hardware and software, especially for intersection video traffic detection. Proprietary equipment prevents agencies from modifying or improving the algorithms used in traffic detection to better suit their needs. Although some vendors do allow flexibility in hardware selection, the software remains immutable in its treatment of traffic detection and underlying assumptions.

The aforementioned commercial systems are not the only ones that require calibration. According to Tian et al. (2002), all available video-image systems require calibration of the field of view, based on field measurements of certain geometric roadway elements, before data collection can begin. This calibration requirement makes the systems inflexible: if the camera position is changed, the calibration measurements may need to be retaken. Therefore, cameras that provide input to VIPs are normally fixed.

In the greater Seattle area, over 250 surveillance video cameras have been installed along major freeways. These video cameras are typically used by traffic operators, who can pan, tilt, and zoom the camera view to monitor traffic conditions. To accommodate the need for various screen views, these cameras are generally not calibrated. For many locations, road surface marks are not available at all for calibrating the cameras. Consequently, none of these surveillance video cameras have been used for automatic data collection with VIPs.


1.3 RESEARCH OBJECTIVE

Considering that single-loop detectors are still the major source of live traffic data and that surveillance video cameras have been widely deployed along urban freeways, this research aimed to improve truck and speed data by using existing single-loop sensors and surveillance video cameras. Specifically, we had the following three objectives:

• design and implement a new algorithm that uses single-loop measurements for speed and truck data estimation;

• develop a prototype plug-and-play Video-based Vehicle Detection and Classification (VVDC) system for truck data collection that uses un-calibrated video images; and

• explore the feasibility of pairing video and single-loop sensors for better speed estimates, and develop and test a computer application that combines single-loop measurements and vehicle lengths calculated by the VVDC system for improved speed calculation.


2.0 STATE OF THE ART

2.1 ESTIMATING SPEED AND TRUCK VOLUMES USING SINGLE-LOOP MEASUREMENTS

As mentioned earlier, a single-loop detector merely measures volume and lane occupancy, and algorithms have been proposed to estimate traffic speed and truck volumes from these measurements. One of the earliest investigations into estimating speed from single-loop outputs began with the landmark speed estimation formula proposed by Athol (1965), which was further examined by Mikhalkin et al. (1972), Gerlough and Huber (1975), and Courage et al. (1976) and has been the principal equation for many subsequent works:

$$ss(i) = \frac{V(i)}{g \cdot T \cdot O(i)} \tag{2-1}$$

where
i = time interval index;
ss(i) = space mean speed in mph for interval i;
V(i) = vehicles per interval;
O(i) = lane occupancy, in percentage of time the detector is occupied;
T = the number of hours per interval; and
g = speed estimation parameter, with units of 100 mile⁻¹.

The speed estimation parameter g is often regarded as a constant that converts occupancy into density; the space mean speed is then calculated from the fundamental relationship between volume, density, and space mean speed. Because both traffic volume and lane occupancy are direct measurements from single loops, and T is a known variable from the system configuration, assuming a constant value of g for speed estimation with Equation (2-1) is simple and has been employed by many state departments of transportation. The Chicago Traffic Systems Center uses g = 1.90 (Aredonk, 1996), and WSDOT used g = 2.4 for a number of years until recently. WSDOT currently uses nighttime traffic to calibrate g periodically, but this still results in a constant g between calibrations. Currently, it is common practice in most transportation agencies to use speed estimation algorithms that apply the speed formula in Equation (2-1) with a constant g. (In this report, such algorithms are called "traditional algorithms.")

However, because the value of g is determined by the Mean Effective Vehicle Length (MEVL), which is approximately the sum of the mean vehicle length and the detector length, as shown in Equation (2-2), g stays constant only when the MEVL does not change from interval to interval (Wang and Nihan, 2000):

$$g(i) = \frac{52.80}{MEVL(i)} \tag{2-2}$$

where MEVL is in feet. If vehicle composition changes with time, the MEVL will vary considerably from interval to interval, and use of a constant g is not appropriate.
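
As a minimal sketch (not the report's code), Equations (2-1) and (2-2) translate directly into a few lines of Python; the numbers in the usage example are made up for illustration.

```python
def g_from_mevl(mevl_ft: float) -> float:
    """Equation (2-2): g(i) = 52.80 / MEVL(i), with MEVL in feet."""
    return 52.80 / mevl_ft

def space_mean_speed_mph(volume: int, occupancy_pct: float,
                         hours_per_interval: float, g: float) -> float:
    """Equation (2-1): ss(i) = V(i) / (g * T * O(i)).

    occupancy_pct is the percentage of the interval during which the
    detector was occupied (e.g., 10.0 for 10 percent).
    """
    return volume / (g * hours_per_interval * occupancy_pct)

# Example (made-up numbers): 8 vehicles and 10 percent occupancy in a
# 20-second interval with g = 2.4 give an estimate of 60 mph.
speed = space_mean_speed_mph(8, 10.0, 20.0 / 3600.0, 2.4)
```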

A number of researchers have proposed speed estimation methods independent of Equation (2-1). Pushkar et al. (1994) utilized a cusp catastrophe theory model to estimate average speeds at a location and compared the results to those obtained from a dual-loop sensor at the same location. Petty et al. (1998) utilized a stochastic traffic model based on assigning a common probability distribution of travel times to vehicles arriving at an upstream point during a given interval. This method, however, depends on disaggregate loop data (1-second polling intervals were used in the study); thus, it is not applicable where even modest data aggregation is performed. Dailey (1999) used a Kalman filter to account for what he considered to be random error in the measurements. Although the speed estimates were reasonable, Coifman (2001) noted that the source of this random error was not well specified.


Sun and Ritchie (1999) used inductive waveform outputs from new loop detector cards, combined with signal processing and statistical analysis, to anonymously identify vehicles between detectors and estimate speed. They further demonstrated that the method is transferable without the need for recalibration; however, it requires considerable investment to update the loop hardware and is thus not feasible for existing installations. Coifman (2001) used an exponential filtering method that could be implemented in a type 170 controller to estimate speed. He also re-addressed the distinction between space mean speed and time mean speed and offered the possibility of examining dual-loop detector stations. However, with the notable exception of Coifman's exponential filter, all of the filtering and modeling methods discussed above suffer from some common drawbacks: some require calibration at each collection site, while others make use of additional data not typically collected at single-loop stations.

Other researchers have focused on developing algorithms that avoid the speed estimation bias caused by using a constant g. Hellinga (2002) proposed an algorithm that uses dual-loop measured vehicle lengths to calculate g and applies the obtained g value to estimate speeds at adjacent single-loop stations. Because some dual-loop detectors are required, this algorithm is only suitable on freeways with a mix of single-loop and dual-loop detectors. Kwon et al. (2003) used an MEVL representative of short vehicles to estimate traffic speed across lanes and correlated those speeds to estimate truck volumes. This algorithm requires one truck-free lane (such as an HOV lane) and also imposes an assumption of cross-lane speed correlation. This assumption may not hold during congested periods, when the speed in the high occupancy vehicle lane may be considerably different from those of neighboring lanes. Coifman et al. (2003) proposed using the median vehicle length rather than the mean vehicle length to estimate speed, noting that the median is less sensitive to outliers than the mean and thus limits the impact of long vehicles on speed estimation. Wang and Nihan (2003) developed an algorithm to screen out intervals that may contain long vehicles from the speed calculation. Because passenger car lengths do not vary greatly, a constant g value corresponding to the average SV length can then be used to estimate speed.

Relatively little work has been done to identify LVs by using single-loop data, although several of the previously discussed methods can also produce LV volume estimates. Sun and Ritchie (1999) utilized waveforms to identify LVs; however, as mentioned before, few currently deployed loop detectors can produce such waveform data. The method developed by Kwon et al. (2003) estimates LV volumes as well, but it relies on the assumption of cross-lane speed correlation, which may fail during congested periods. Cherrett et al. (2000) used the interval average occupancy per vehicle to identify LVs at speeds as low as 15 km/h. Wang and Nihan (2003) developed a distance-weighted Nearest Neighbor Decision rule, based on the length distributions of the long and short vehicle populations, to obtain accurate LV volume estimates. The inherent nature of this classification rule seems well suited to LV classification.
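
The fragment below is only a generic distance-weighted nearest-neighbor vote on vehicle length, included to illustrate the flavor of this family of rules; the actual rule in Wang and Nihan (2003) is defined over the SV and LV length population distributions and differs in detail. The sample-length lists are hypothetical inputs.

```python
def nnd_classify(length_ft: float, sv_lengths, lv_lengths, k: int = 5) -> str:
    """Generic distance-weighted k-nearest-neighbor vote on length.

    Illustrative only; Wang and Nihan (2003) build their rule on the
    SV/LV length population distributions, which differs in detail.
    """
    neighbors = [(abs(length_ft - s), "SV") for s in sv_lengths]
    neighbors += [(abs(length_ft - l), "LV") for l in lv_lengths]
    votes = {"SV": 0.0, "LV": 0.0}
    for dist, label in sorted(neighbors)[:k]:
        votes[label] += 1.0 / (dist + 1e-6)  # closer neighbors weigh more
    return max(votes, key=votes.get)
```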

Once the mix of SVs and LVs is known for a particular time interval, the MEVL can be calculated. The g value then follows from the MEVL via Equation (2-2), and speed can be estimated by using Equation (2-1).
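
For instance, under the stated relationship between MEVL, mean vehicle length, and detector length, the interval calculation can be sketched as follows; the class mean lengths and loop length below are placeholders rather than values from the report.

```python
def interval_g(sv_count: int, lv_count: int,
               mean_sv_len_ft: float = 17.0,   # placeholder SV mean length
               mean_lv_len_ft: float = 60.0,   # placeholder LV mean length
               loop_length_ft: float = 6.0) -> float:
    """g for an interval from its SV/LV mix, via Equation (2-2)."""
    total = sv_count + lv_count
    mean_len_ft = (sv_count * mean_sv_len_ft + lv_count * mean_lv_len_ft) / total
    mevl_ft = mean_len_ft + loop_length_ft  # MEVL = mean length + detector length
    return 52.80 / mevl_ft
```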

2.2 VEHICLE DETECTION AND CLASSIFICATION USING VIDEO IMAGE PROCESSING

Computer vision is not an entirely new concept for vehicle detection and classification; many agencies began investigating the possibilities of video detection 15 or more years ago. The first systems, however, were unable to function adequately under a variety of environmental conditions: shadows affected detection, nighttime detection was troublesome, and poor weather obscured vehicles. Therefore, many agencies continued to use loop detector systems or considered other detection technologies, such as radar (Weber, 1999). Over the years, however, many improvements have been made as advances in computer technology and image processing algorithms have been applied in the traffic detection arena. Early video detection research at the University of Minnesota (Michalopoulos, 1991) resulted in the Autoscope video detection systems that are widely used in today's traffic detection and surveillance operations around the world. This section provides a brief overview of the state of the art in computer vision for traffic applications, focusing on shadow removal and length-based classification techniques.

2.2.1 Shadow Removal Techniques for Traffic Applications

The majority of research on the removal of shadows from images has been performed in the fields of computer science and electrical engineering. One of the earliest investigations in shadow removal was done by Scanlan et al. (1990). They split an image into square blocks and produced an image based on the mean intensity of each block. The median of the block mean values was then used as a basis for scaling all blocks below the median up to the median value. The authors noted that this method is appropriate only for images in which the objects of interest occupy the higher end of the intensity range; thus, it is not suited for situations in which the objects of interest occupy the lower end of the intensity range (Fung et al., 2002). The method also introduces some loss of contrast and tends to cause "blocking" (Wang et al., 2004).
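
A rough reconstruction of that block-scaling idea is shown below; the block size and the uniform per-block scaling are illustrative choices, and the details of Scanlan et al.'s original method differ.

```python
import numpy as np

def block_median_scaling(gray: np.ndarray, block: int = 16) -> np.ndarray:
    """Scale blocks whose mean intensity falls below the median block
    mean up to that median (a sketch of Scanlan et al., 1990)."""
    out = gray.astype(float)
    h, w = gray.shape
    blocks = []
    for y in range(0, h - block + 1, block):
        for x in range(0, w - block + 1, block):
            blocks.append((y, x, out[y:y + block, x:x + block].mean()))
    median = float(np.median([m for _, _, m in blocks]))
    for y, x, m in blocks:
        if 0 < m < median:
            out[y:y + block, x:x + block] *= median / m
    return np.clip(out, 0, 255).astype(np.uint8)
```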


Gamba et al. (1997) built a shadow model based upon images from a monocular color image sequence. The authors noted that shadows in a scene interact with still portions of the scene and that these are more like each other than they are like the target objects of interest. With this in mind, they used hue, luminosity, and saturation values to construct a reference image for the shadow model. The shadows present in the reference image were used as a model for moving cast shadows. However, because the reference image may not always contain enough still shadows to provide an accurate model, they also constructed a strip bitmap model to improve the shadow model. In this strip bitmap model, the image was split into a number of horizontal strips to be analyzed separately, since luminosity values change with distance from the camera (distant shadows appear lighter than closer shadows). Although the number of misclassified pixels was low, the algorithm was tested on only one scene, at a supermarket parking lot. Furthermore, there was an implicit assumption that shadows are cast on the same kind of surface, which may not hold true for a variety of outdoor scenes (Fung et al., 2002).

Gu et al. (2005) implemented a biological approach to shadow removal. Noting synchronous pulse bursts in the visual cortex of cats, they implemented a Pulse Coupled Neural Network (PCNN) to simulate this effect for the removal of shadows on the basis of optimization of the linking strength. The results indicated that shadows were satisfactorily removed from images that did not contain high degrees of noise.

Hsieh et al. (2004) performed shadow removal to improve the accuracy of a person-tracking system. Their shadow removal method was based on the assumption that shadows have less variation in chromaticity and luminance. The tracked area was decomposed by a wavelet transform and projected onto low and high frequency components to identify areas of low frequency that were considered to be shadow. The algorithm was able to perform satisfactorily even when the tracked people wore colors similar to those of the background.

Recognizing that many shadow removal algorithms produce distorted and noisy results that misrepresent the shape of the original object, Xu et al. (2005) set out to fix these distortion errors. They presented a shadow removal algorithm based upon inspection of color and texture. The unique part of their work was the introduction of morphological operations upon the blobs remaining after shadow removal, to reconstruct the shadow-removed object shape on the basis of the shape of the object before shadow removal. The algorithm performs well except in cases of very large cast shadows. Correcting the brightness threshold used in the paper to account for larger shadows would improve the results but would also introduce false positive shadow pixels.

Fung et al. (2002) proposed a statistical shadow removal algorithm based upon construction of a probability map called the Shadow Confidence Score (SCS). The score was based on investigation of luminance, chromaticity, and gradient density. The cast shadow was determined to be those regions with high SCS values that lay outside the convex hull of the vehicle edge. The algorithm was tested on a variety of vehicle types and colors under different lighting conditions and viewing angles and achieved an error rate of 14 percent. Motorcycles and vehicles with colors similar to the background caused the highest error rates. In the case of smaller vehicles, the error could largely be attributed to the use of a convex hull to represent the object, since the outlines of smaller vehicles and motorcycles are not very well preserved by a convex hull.


Noting that the performance of traditional Bayesian Networks deteriorates with highly varying input data, Lo et al. (2003) developed an adaptive Bayesian Network to avoid the problems of its static counterparts. This was accomplished through the development of an efficient means of capturing the variation in subsequent input images; this information was then used to adjust the network parameters. The performance was evaluated against a static Bayesian Network, and the adaptive network performed better.

Wang et al. (2004) proposed a three-step process for removing shadows from a foreground object obtained after subtraction of an image from a background image. The first step is illumination assessment, in which the foreground region is analyzed on the basis of pixel intensity and energy to determine whether it contains any shadow. If a shadow is suspected to exist on the basis of aggregate statistics of bright and dark pixels, the shadow detection step is performed. The direction of illumination is found by applying the Otsu method (Otsu, 1979) over the boundary pixels, and points near the boundary in the direction of illumination are sampled to derive shadow attributes. Object areas are recognized by subtracting the edge image of the background from the edge image of the foreground object; areas with remaining edges are considered to be the object area. In the final step, the object is recovered by using information from the object area and the shadow attributes to construct the object. Foreground pixels with intensity values greater than the background, or those with characteristics different from the shadow attributes, are preserved. To preserve self-shadow areas, where pixels have characteristics similar to those of the cast shadow, pixels close to the object area are also preserved. Finally, any holes in the object area are filled. Scant experimental results were provided, limiting the ability to evaluate this method. Furthermore, the method used to find the direction of illumination may fail if the shadow has a halo effect at its edge (pixels of high intensity at the boundary of the shadow).
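
The edge-image subtraction step lends itself to a compact sketch. The fragment below uses a plain gradient-magnitude edge map as a stand-in for whichever edge operator Wang et al. (2004) actually used; the threshold value is arbitrary.

```python
import numpy as np

def edge_map(gray: np.ndarray, thresh: float = 30.0) -> np.ndarray:
    """Binary edge map from gradient magnitude (stand-in edge operator)."""
    gy, gx = np.gradient(gray.astype(float))
    return np.hypot(gx, gy) > thresh

def object_area_edges(foreground: np.ndarray, background: np.ndarray) -> np.ndarray:
    """Edges present in the foreground but not in the background; the
    remaining edges mark the object area, per the step described above."""
    return edge_map(foreground) & ~edge_map(background)
```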

An excellent survey and evaluation of many moving-shadow removal algorithms can be found in Prati et al. (2003).

2.2.2 Length Classification Techniques for Traffic Applications

There has been considerable interest in vehicle detection via computer vision, especially for crash-avoidance and other driver assistance systems; there has been comparatively little interest in vehicle classification via computer vision. Nevertheless, several investigations into vehicle classification via computer vision have been conducted recently. Lai et al. (2001) demonstrated that vehicle dimensions could indeed be accurately estimated from a single camera angle through the use of a set of coordinate mapping functions. Through the use of a shadow removal method (important for maintaining true vehicle dimensions) and a convex hull to produce a vehicle mask, they were able to estimate vehicle lengths to within 10 percent in every instance. Their method, however, requires camera calibration to map image angles and pixels into real-world dimensions. Furthermore, few vehicle types were tested, and the algorithm was applied in only one location.

Gupte et al. (2002) performed similar work by tracking regions—using the fact

that all motion occurs in the ground plane—to detect, track, and classify vehicles. Before

vehicles can be counted and classified, the authors’ program must determine the

relationship between the tracked regions and vehicles (e.g., a vehicle may have several

regions, or a region may have several vehicles). In a 20-minute trial of the program, 90

percent of all vehicles were properly detected and tracked, and 70 percent of those


vehicles were properly classified. Unfortunately, their work did not address problems

associated with shadows, so application of the algorithm is limited at the current stage.

Hasegawa and Kanade (2005) developed a system capable of detecting and

classifying moving objects by both type and color. Vehicles from a series of training

images were identified by an operator to develop the characteristics associated with each

object type. The features used were mostly geometric, including bounding box dimensions, the centroid coordinate, and the area of the background-subtracted region. Linear discriminant analysis and a weighted K-Nearest Neighbor rule were used to assign presented vehicles to an object type. The color of each object was identified in a

similar manner. In a test of 180 presented objects, 91 percent were correctly identified.

A major disadvantage of the system is the requirement for training images from the

location of interest. Furthermore, although color information may be helpful for

identification and tracking of the same vehicle between images, it is not clear that such

information is of interest for data collection; there are few if any assumptions about color

alone that can lead to reliable vehicle classification.

Rad and Jamzad (2005) developed a program to count and classify vehicles as

well as identify the occurrence of lane-changes through tracking. Their approach utilized

a background subtraction approach combined with morphological operations to identify

moving vehicle regions. Bounding boxes were obtained for each vehicle, upon which an

occlusion analysis and classification were based. Boxes with a narrow width were

determined to be motorcycles or bicycles. However, the bounding box characteristics

were not sufficient to identify buses and heavy vehicles, so it was assumed that vehicles

with lower speeds were buses and heavy vehicles. The report did not explain whether


speed estimation was performed internally or speeds were obtained from another detection system. Furthermore, this assumption is easily violated, especially in congested situations. Although favorable results were reported, only region measurement, splitting, and losses in tracking were analyzed, while the accuracy of vehicle detection and classification was not measured.

Graettinger et al. (2005) used video data collected from an Autoscope Solo Pro

commercial detection system to provide classifications corresponding to the 13 FHWA

vehicle classes. Noting that the Autoscope system can only produce five distinct length

classification categories, they applied a disaggregation model that is typically used in

stochastic hydrology to produce finer rainfall estimates from yearly rainfall data. They

demonstrated the ability to produce FHWA-compliant classifications from as few as two

Autoscope-based length categories. The method was tested at one location and validated

at four other sites. Ground truth data were obtained via axle counters, and an overall

misclassification rate of 3.4 percent was achieved. Interestingly, the authors noted that

accuracy decreased when a higher number of length categories within the Autoscope

program was used, so typically the minimum of two classifications was used. Accuracy

decreased notably when a generic model was used because of the large dependency upon

mean volumes during model training. However, use of site-specific models would be

less feasible because new models for each location would have to be developed.

The studies described above provided valuable insights into the video-based

vehicle detection and classification problems addressed in this study.


PART II SPEED AND BIN-VOLUME ESTIMATES USING SINGLE-LOOP OUTPUTS

3.0 SINGLE-LOOP ALGORITHM DESIGN

In this study, we developed a region growing algorithm to filter the data of all

intervals in a period to identify intervals containing only SVs. Note that the terms

“period” and “interval” are used with significant distinction here. An interval indicates

the duration of a single volume or occupancy measurement and is predetermined by the

loop detection system (20 seconds for the WSDOT loops used in this study). The term

period indicates the sum of interval times for the total number of intervals observed.

Once the SV-only intervals are identified, Equation (2-1) is applied by using a

constant g value based upon the SV mean length, as reported by Wang and Nihan (2003).

LV volumes are then estimated by using the Nearest Neighbor Decision rule proposed in

the same study.

3.1 PROPERTIES OF VEHICLE LENGTH DISTRIBUTION

We used the vehicle length distribution findings reported in Wang and Nihan

(2003) to address the problem of obtaining speed estimates from single-loop data. Using

data from a dual-loop detector (ES-163R: MMS___T3) in lane 3 of southbound I-5 at NE

130th St. from May 3 to May 16, 1999, Wang and Nihan found that the vehicle length

distribution was clearly bimodal, as illustrated in Figure 3-1. The bimodal distribution of

vehicle length indicated that vehicles can be naturally divided into two classes,

corresponding to the SV class and the LV class, according to their lengths. This vehicle

length distribution feature was verified by Kwon et al. (2003). Upon separation of the

population into two groups at a length of 40 ft (12.2 m), Wang and Nihan (2003)


regarded the two resulting sub-distributions representing the SV and LV groups as

normally distributed. These SV and LV length distributions with the associated normal

distribution curves are illustrated in Figure 3-2.

[Figure 3-1 shows a bimodal histogram of vehicle lengths (percent of vehicles versus length in meters), with the SV and LV portions of the distribution labeled.]

Figure 3-1: Length Distribution of Vehicles on Southbound I-5 (Wang and Nihan, 2003)

[Figure 3-2 shows two histograms of percent of vehicles versus vehicle length in meters: the SV lengths (roughly 3 to 12 m) and the LV lengths (roughly 12 to 30 m), each overlaid with its fitted normal curve.]

Figure 3-2: SV and LV Length Distributions with Normal Distribution Curves (Wang and Nihan, 2003)


Therefore, SV lengths are assumed to follow the N(μ_sv, σ_sv²) distribution, and LV lengths to follow the N(μ_lv, σ_lv²) distribution, where μ_sv and σ_sv² are the mean and variance of SV lengths, respectively, and μ_lv and σ_lv² are the mean and variance of LV lengths, respectively. The descriptive statistics for the two populations appear in Table 3-1. The standard deviation of SV lengths is σ_sv = 2.85 ft, about one fourth of that of LV lengths (σ_lv = 11.78 ft). This indicates that SV lengths vary narrowly in comparison to LV lengths. These vehicle length distribution features are used for separating intervals containing only SVs from those that contain LVs.

Table 3-1: Vehicle Length Distribution Statistics

                    SV Class, ft (m)    LV Class, ft (m)
Mean                17.98 (5.48)        73.82 (22.5)
Std. Deviation      2.85 (0.87)         11.78 (3.59)
Minimum             6.00 (1.83)         40.00 (12.19)
Maximum             39.01 (11.89)       98.98 (30.17)
Observations        4443                472

3.2 ALGORITHM DESIGN

3.2.1 Grouping Intervals

The design of our speed estimation algorithm is based on a revised “region

growing” concept. Region growing is a technique traditionally used in image

segmentation applications that allows computer vision systems to separate areas of an

image into regions depending on criteria of interest, such as color or texture. Shapiro and

Stockman (2001) stated that “a region grower begins at a position in an image and

attempts to grow each region until the pixels being compared are too dissimilar to the


region to add them.” The position at which region growing begins is known as the seed

for the algorithm. This same idea can be used to distinguish between intervals containing only SVs and those containing LVs. One characteristic of region growing is that the

statistics used for determining membership in a region are updated each time a new

member is added to the region. Applying this concept to speed estimation, we first group

m 20-second intervals into a period of length m/3 minutes (for example, if m=15

intervals, then the length of a period is 5 minutes). All the m interval data are then

processed simultaneously to identify intervals with only SVs. Intervals with no vehicles

present are eliminated from the analysis. The occupancy per vehicle (O/V) is calculated for each remaining interval, and the intervals are sorted in ascending order of occupancy per vehicle to prepare for region growing.

Once the intervals are sorted, the interval with the smallest non-zero O/V value is assumed to contain only SVs; this interval serves as the seed for the region growing algorithm.

Wang and Nihan (2003) found that an assumption that the smallest two non-zero O/V

intervals contain only SVs was violated less than 3 percent of the time when 5-minute

periods with the typical traffic composition (about 10 percent LVs) on I-5 were used.

Therefore, the assumption that only the first interval contains only SVs would be violated

even less frequently. The group occupancy per vehicle (GOV) is calculated by using the

occupancy and volume measurements for all intervals already identified as being in the

group (i.e., let interval x be the last identified interval in the group):

    GOV = \frac{\sum_{i=1}^{x} O(i)}{\sum_{i=1}^{x} V(i)}        (3-1)

Then the occupancy per vehicle ratio (ΔO/V) of interval x+1 is calculated:

    \Delta_{O/V} = \frac{O(x+1)/V(x+1)}{GOV}        (3-2)

The calculated ΔO/V is then compared with a statistically determined threshold to decide whether interval x+1 should be accepted as part of the current group or a new group should be started. There is no limit on the number of groups the algorithm may generate. Under the

assumption of constant speed over the m-interval period, the vehicle length distribution

properties noted previously can be translated into occupancy distributions. Traditionally,

the region growing model uses a hypothesis-testing t-statistic as a basis for inference of

group membership at a specified confidence level. In this case, because the SV and LV

length distributions are known to be normal, the threshold h is based on a normal

distribution instead of the student-t distribution. The confidence level is chosen to

equalize the probabilities of acceptance of an interval with LVs and rejection of an SV-

only interval. Thus, the greatest amount by which the ΔO/V value of an interval can differ

from the mean while the interval is still accepted as an SV-only interval is given by:

    h(i) = \frac{\mu_{sv} + Z \sigma_{sv} / \sqrt{V(i)}}{\mu_{sv}} = \frac{Z \sigma_{sv}}{\mu_{sv} \sqrt{V(i)}} + 1        (3-3)

where: μ_sv is the mean SV length; σ_sv is the standard deviation of SV lengths; Z is the Z-statistic corresponding to the chosen confidence level for a standard normal distribution; and V(i) is the number of vehicles (observations) in the interval.

For the vehicle length distributions used in this report, the Z-statistic was chosen to equalize the probability of mis-assigning an SV as an LV with that of mis-assigning an LV as an SV; the resulting cut point is 3.817 standard deviations above the short vehicle mean (i.e., Z = 3.817).

One remaining factor, congestion, is taken into consideration before each interval

is classified into a group. It is expected that, during congested periods, the mapping of

interval occupancies onto the vehicle length distribution will be more prone to error.

This is because, although the vehicle length distribution remains unchanged, interval

occupancy levels will increase considerably during these periods, violating the constant

speed assumption. For example, Wang and Nihan (2003) found that for a period length

of m = 15 (5 minutes), actual speeds in the period varied by more than 15 percent in 46

out of 288 periods in the day, representing 16 percent of the periods. In particular, it is

expected that the variances will be considerably affected by the increased variability of

the data set. Accounting for these congested periods can help to relax the constant speed

assumption and provide better results. Using loop occupancy as a surrogate for

congestion, a brief study was conducted to measure the standard deviations of vehicle

lengths at different loop occupancy levels. Loop event data were collected by the

ALEDA (Cheevarunothai et al., 2005) system on southbound I-5 at 145th St on October

25-26, 2004. Figure 3-3 shows that congestion was clearly evident at loop occupancy

levels of 20 percent and above, and the results presented in Table 3-2 indicate that at

these occupancy levels, the standard deviations of length tended to be twice as large as

those observed for low occupancy levels. Therefore, whenever the average loop

occupancy for a period exceeds 20 percent, the Z-statistic in Equation (3-3) is doubled to

account for the increased uncertainty in variance.
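To make the threshold concrete, the following C# sketch evaluates Equation (3-3) with the congestion adjustment just described. It is a minimal illustration using the parameter values reported in this chapter (μ_sv = 17.98 ft, σ_sv = 2.85 ft, Z = 3.817); the type and member names are ours, not the ST-Estimator source.

    using System;

    // Minimal sketch of the acceptance threshold h(i) of Equation (3-3).
    // Parameter values are those reported in this chapter; names are ours.
    static class SvThreshold
    {
        const double MuSv = 17.98;    // mean SV length, ft (Table 3-1)
        const double SigmaSv = 2.85;  // SV length std. deviation, ft (Table 3-1)
        const double Z = 3.817;       // equalizes the two misassignment probabilities

        // h(i) bounds how far an interval's O/V may sit above the group GOV.
        // When period occupancy exceeds 20 percent, Z is doubled (Table 3-2).
        public static double H(int vehiclesInInterval, bool congested)
        {
            double z = congested ? 2.0 * Z : Z;
            return z * SigmaSv / (MuSv * Math.Sqrt(vehiclesInInterval)) + 1.0;
        }
    }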


Figure 3-3: Congestion Occupancy Threshold

Table 3-2: Vehicle Length Distribution Statistics by Lane Occupancy Level

LANE 2
  Occupancy:          0-10%           10-15%          15-20%          20+%
  Observations        8088            11412           4502            2732
  Minimum, ft (m)     38.82 (11.83)   12.26 (3.74)    11.63 (3.54)    8.62 (2.63)
  Maximum, ft (m)     87.27 (26.60)   100.08 (30.51)  77.37 (23.58)   69.93 (21.31)
  Median, ft (m)      63.42 (19.33)   58.34 (17.78)   53.74 (16.38)   30.39 (9.26)
  Average, ft (m)     61.44 (18.73)   59.84 (18.24)   53.41 (16.28)   32.27 (9.84)
  Std. Dev., ft (m)   4.74 (1.44)     5.13 (1.56)     8.09 (2.47)     9.47 (2.89)
  Multiplier          --              1.08            1.71            2.00

LANE 3
  Occupancy:          0-10%           10-15%          15-20%          20+%
  Observations        10858           8515            4951            3068
  Minimum, ft (m)     15.52 (4.73)    29.09 (8.87)    11.09 (3.38)    5.68 (1.73)
  Maximum, ft (m)     99.17 (30.23)   87.27 (26.60)   77.37 (23.58)   63.80 (19.44)
  Median, ft (m)      63.42 (19.33)   63.42 (19.33)   53.74 (16.38)   29.09 (8.87)
  Average, ft (m)     63.44 (19.34)   61.71 (18.81)   52.90 (16.12)   30.30 (9.24)
  Std. Dev., ft (m)   4.58 (1.40)     4.91 (1.50)     8.26 (2.52)     9.37 (2.85)
  Multiplier          --              1.07            1.80            2.04

COMBINED
  Occupancy:          0-10%           10-15%          15-20%          20+%
  Observations        18946           19927           9453            5800
  Minimum, ft (m)     15.52 (4.73)    12.26 (3.74)    11.09 (3.38)    5.68 (1.73)
  Maximum, ft (m)     99.17 (30.23)   100.08 (30.51)  77.37 (23.58)   69.93 (21.31)
  Median, ft (m)      63.42 (19.33)   58.34 (17.78)   53.74 (16.38)   30.30 (9.24)
  Average, ft (m)     62.59 (19.08)   60.64 (18.48)   53.14 (16.20)   31.23 (9.52)
  Std. Dev., ft (m)   4.75 (1.45)     5.12 (1.56)     8.18 (2.49)     9.46 (2.88)
  Multiplier          --              1.08            1.72            1.99


To determine group membership, the occupancy per vehicle ratio is compared to

the allowable relative difference calculated by Equation (3-3) as the threshold h.

Whenever the ΔO/V for an interval is less than the threshold h for that interval, the interval

is considered to be a member of the group, and the GOV is updated. When an interval

ΔO/V exceeds the threshold, the group is closed, a new group is started, and the GOV for

the new group is set to represent the current interval. The GOV for the new group is

updated as additional intervals become members. In this manner, each interval in the

period is assigned to a group. Figure 3-4 provides a flowchart of the algorithm.

Figure 3-4: Single-Loop Region Growing Algorithm Flowchart
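The grouping loop itself can be summarized in a short C# sketch, shown below. It assumes one period's interval volumes and occupancies are already in memory and takes the threshold h(i) of Equation (3-3) as a delegate; the Interval type and all names are illustrative, not the production ST-Estimator code.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of the revised region growing grouping for one period.
    class Interval { public int Volume; public double Occupancy; public int Group; }

    static class RegionGrowing
    {
        // h maps an interval's vehicle count to the Equation (3-3) threshold.
        public static void AssignGroups(List<Interval> period, Func<int, double> h)
        {
            var sorted = period.Where(v => v.Volume > 0)              // drop empty intervals
                               .OrderBy(v => v.Occupancy / v.Volume)  // ascending O/V
                               .ToList();
            int group = 0;
            double sumOcc = 0.0, sumVol = 0.0;                        // running sums defining GOV
            foreach (var iv in sorted)
            {
                if (sumVol > 0)
                {
                    double gov = sumOcc / sumVol;                     // Equation (3-1)
                    double ratio = iv.Occupancy / iv.Volume / gov;    // Equation (3-2)
                    if (ratio > h(iv.Volume))                         // too dissimilar:
                    {
                        group++;                                      // close the group and
                        sumOcc = 0.0; sumVol = 0.0;                   // start a new one here
                    }
                }
                iv.Group = group;
                sumOcc += iv.Occupancy;
                sumVol += iv.Volume;
            }
            // Group 0 now holds the SV-only intervals used for speed estimation.
        }
    }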


Figure 3-5 provides a visual example of region growing, with the identified

groups colored differently. As indicated by the last column of the Excel worksheet in

Figure 3-5, four groups are identified. The first group, colored in light green, corresponds

to the SV-only group.

Figure 3-5: Interval Groups after Region Growing

3.2.2 Speed Estimation

Once the revised region growing algorithm has classified all intervals in the period, speed estimation can be performed quite easily. Because each interval has been classified into a group, all the intervals containing only short vehicles are in the first group. The volume and occupancy of this group are labeled V_sv and O_sv, respectively. The speed estimation parameter, g, is calculated in a manner similar to that of Equation (2-2), but using the mean short vehicle length, μ_sv, instead of the MEVL, together with a sensitivity correction parameter, β, as suggested by Wang and Nihan (2003):

    g = \frac{52.80}{\left(\mu_{sv} + l_{loop}\right) \cdot \beta}        (3-6)



where l_loop is the loop detector length. A simple but effective way of calibrating β is to find a duration when traffic is free flowing. The space-mean free flow speed is relatively stable over time. It can be calculated from samples measured by a radar gun or simply estimated on the basis of the speed limit and driving experience. By using the mean free flow speed and measurements of intervals with only SVs during the free flow duration, β can be calibrated through Equation (3-7):

    \beta = \frac{s_{ff}}{C} \sum_{h=1}^{C} \frac{O_{sv}(h)}{V_{sv}(h) \cdot \left(\mu_{sv} + l_{loop}\right)}        (3-7)

where C is the number of intervals with only SVs in the selected free flow duration for β calibration; h is the index of intervals with only SVs; and s_ff is the space-mean speed of free flow traffic at the station.

Once β has been calibrated, the algorithm is ready to provide dynamic traffic speed estimates for any time period. The estimated period space-mean speed, s_s, is then calculated with Equation (3-8). Equation (3-8) is similar to Equation (2-1) except that the period SV volume and occupancy are used instead of an interval volume and occupancy.

    s_s = \frac{V_{sv}}{T \cdot O_{sv} \cdot g}        (3-8)
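As a worked illustration, the C# sketch below restates Equations (3-6) through (3-8): calibrate β from SV-only intervals observed during free flow, form g, and estimate the period speed. It is a sketch only; unit conventions follow those of Equation (2-1), and all names are ours.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of beta calibration and period speed estimation.
    static class SpeedEstimator
    {
        // Equation (3-7): beta from C free-flow, SV-only intervals and a known
        // space-mean free flow speed sFF (e.g., from a radar-gun sample).
        public static double CalibrateBeta(
            IList<(double occupancy, int volume)> svIntervals,
            double sFF, double muSv, double loopLength)
            => sFF / svIntervals.Count
               * svIntervals.Sum(i => i.occupancy / (i.volume * (muSv + loopLength)));

        // Equation (3-6): the speed estimation parameter g.
        public static double G(double muSv, double loopLength, double beta)
            => 52.80 / ((muSv + loopLength) * beta);

        // Equation (3-8): period space-mean speed from the SV-only group totals.
        public static double PeriodSpeed(double svVolume, double svOccupancy,
                                         double periodT, double g)
            => svVolume / (periodT * svOccupancy * g);
    }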

3.2.3 LV Volume Estimation

Although the revised region growing algorithm produces groups in addition to the

SV-only group, there are no absolute mapping relationships between intervals in an LV-

containing group and the number of LVs in an interval. This is because region growing

depends on high membership similarity in each group to produce good results. While


intervals with no LVs are very similar in terms of average occupancy per vehicle, those

with LVs are not. Consider an interval with one SV and one LV and an interval with ten

SVs and one LV. Although both intervals have only one LV, the average occupancies

per vehicle for each interval are quite different. Because the distance weighted Nearest

Neighbor classification algorithm developed by Wang and Nihan (2003) is well suited to

handle this situation, it was employed in this study to estimate LV volumes.

The Nearest Neighbor (NN) Decision rule is typically used to assign an

unclassified sample to one of several predefined classification categories. The distance

between the current sample and each of the predefined categories is calculated for

comparison, and the category with the smallest distance to the current sample wins, i.e.,

the current sample is assigned to the nearest category. In this case, the predefined

categories are all possible unique compositions of SVs and LVs. Because the maximal

LV volume per interval observed on I-5 in the greater Seattle area is seven, the maximal

number of predefined categories should not be more than eight. That is, for any interval k

of period j, there should be no more than eight possible vehicle compositions,

corresponding to LV numbers from zero to seven, respectively. If V(k) < 7, there are V(k) + 1 categories, identified by LV numbers from 0 to V(k). For example, if only four vehicles are detected in the interval (i.e., V(k) = 4), then the interval can be assigned to one of the following five predefined categories: (4 SVs, 0 LV), (3 SVs, 1 LV), (2 SVs, 2 LVs), (1 SV, 3 LVs), and (0 SV, 4 LVs).

The vehicle composition in interval k is considered an unclassified observation. Its measurable feature is represented by its MEVL, \bar{l}(k), calculated as follows:

    \bar{l}(k) = \frac{O(k) \cdot s_s}{V(k) \cdot \beta}        (3-9)


As mentioned earlier, we assume that LV lengths and SV lengths follow the N(μ_lv, σ_lv²) and N(μ_sv, σ_sv²) distributions, respectively. Because the lengths of the vehicles in an interval are independent, the distribution of the mean vehicle length for a category with x LVs (where 0 ≤ x ≤ min(7, V(k))) can be determined as N(μ_x(k), σ_x²(k)), where

    \mu_x(k) = \frac{\left(V(k) - x\right) \mu_{sv} + x \mu_{lv}}{V(k)}        (3-10)

    \sigma_x^2(k) = \frac{\left(V(k) - x\right) \sigma_{sv}^2 + x \sigma_{lv}^2}{V(k)^2}        (3-11)

The similarity between the unclassified observation and the predefined category

with x LVs is measured by the distance calculated by Equation (3-12).

    d_x(k) = \frac{\left| \bar{l}(k) - l_{loop} - \mu_x(k) \right|}{\sigma_x(k)} \quad \text{for } x = 0, 1, \ldots, \min(V(k), 7)        (3-12)

Equation (3-12) converts the variable \bar{l}(k) - l_{loop} (the mean vehicle length) into a standardized variable, d_x(k) (the absolute value of a variable that follows the N(0, 1) distribution), which represents the distance to the origin. The smaller the d_x(k), the greater the probability that the current interval contains x LVs. If

    d_n(k) \leq d_x(k) \quad \text{for } x = 0, 1, \ldots, \min(V(k), 7)        (3-13)

then we know that interval k belongs to the category that has n LVs. The LV number (n)

and the SV number (V(k) – n) can be determined correspondingly.
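The decision rule reduces to a few lines of code. The C# sketch below restates Equations (3-9) through (3-13), using the length statistics of Table 3-1; the class and method names are ours, and the code is an illustration rather than the ST-Estimator source.

    using System;

    // Sketch of the Nearest Neighbor LV-volume rule.
    static class LvClassifier
    {
        const double MuSv = 17.98, SigmaSv = 2.85;    // SV length statistics, ft (Table 3-1)
        const double MuLv = 73.82, SigmaLv = 11.78;   // LV length statistics, ft (Table 3-1)

        // Equation (3-9): the interval's MEVL from its occupancy and volume,
        // the estimated period speed, and the calibrated sensitivity beta.
        public static double Mevl(double occupancy, double speed, int volume, double beta)
            => occupancy * speed / (volume * beta);

        // Equations (3-10) to (3-13): choose the LV count whose category
        // distribution lies closest, in standardized distance, to the MEVL.
        public static int CountLvs(double mevl, double loopLength, int volume)
        {
            double meanLength = mevl - loopLength;    // mean vehicle length
            int best = 0;
            double bestDist = double.MaxValue;
            for (int x = 0; x <= Math.Min(volume, 7); x++)
            {
                double mu = ((volume - x) * MuSv + x * MuLv) / volume;              // (3-10)
                double var = ((volume - x) * SigmaSv * SigmaSv
                              + x * SigmaLv * SigmaLv) / ((double)volume * volume); // (3-11)
                double d = Math.Abs(meanLength - mu) / Math.Sqrt(var);              // (3-12)
                if (d < bestDist) { bestDist = d; best = x; }                       // (3-13)
            }
            return best;    // the SV count is volume - best
        }
    }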


3.3 ALGORITHM IMPLEMENTATION

The algorithm described in sections 3.1 and 3.2 was first implemented in

Microsoft Excel with a Visual Basic for Applications (VBA) script. This implementation

was used to develop the algorithm and produce the test results. For production use, the algorithm was implemented in C#, and the resulting application was named the single-loop Speed and Truck volume Estimator (ST-Estimator). The ST-Estimator system is a

server-client type of system that uses the service provided by the loop_client application

developed by the University of Washington (UW) Intelligent Transportation System

(ITS) Research Program. The ST-Estimator allows users to collect real-time data in

addition to archive analysis.

The loop_client application is a Unix program that disseminates lane occupancy

and volume data collected by the WSDOT loop detection systems deployed in the Seattle

area freeways (UW ITS Research Program, 1997). It posts loop detector measurements

every 20 seconds on a designated server port (by default, port 9004). The ST-Estimator system connects to the loop_client server port by using the Transmission Control Protocol (TCP) for data download. Because loop_client does not support selective

downloads, ST-Estimator takes in measurements of the most recent 20-second interval

from all loop detectors. Selection is then made to store only user-specified loop data in a

queue for real-time traffic speed and LV volume estimates.
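For illustration, a minimal C# client of this kind might look as follows. The host name is a placeholder, port 9004 is the default noted above, and the line-oriented read loop is an assumption about the feed's format rather than a documented loop_client interface.

    using System;
    using System.IO;
    using System.Net.Sockets;

    // Sketch of polling the loop_client feed over TCP.
    class LoopClientReader
    {
        static void Main()
        {
            // Placeholder host; 9004 is the default loop_client port.
            using (var client = new TcpClient("loopserver.example.edu", 9004))
            using (var reader = new StreamReader(client.GetStream()))
            {
                string line;
                while ((line = reader.ReadLine()) != null)
                {
                    // Each 20-second posting covers all loop detectors; a real
                    // client would keep only the user-specified loops and queue
                    // them for speed and LV volume estimation.
                    Console.WriteLine(line);
                }
            }
        }
    }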

Figure 3-6 provides a snapshot of the user interface when ST-Estimator is

launched. All available loop stations are listed for users to select. If a station is double-clicked, all loop sensors at that station are listed in the “Loop Sensors” window. A user


can then double-click a desired loop, and that loop will show up in the “Selected Loop”

window. Only one loop can be selected at a time.

Figure 3-6: User Interface of the ST-Estimator System

Once a loop has been specified for speed and LV volume estimates, users can click the “Run” button to activate the “Real Time Data” window. The loop detector name, location, vehicle count, lane occupancy, and estimated speed and LV number are shown together with the timestamp in this window. A snapshot of this window is shown in

Figure 3-7.

System and algorithm parameters can be specified by users via a Program Settings

interface (Figure 3-8). Users can modify the values of any program parameter under either the Basic tab or the Advanced tab of the interface. Furthermore, users can calibrate the loop sensitivity coefficient, β, by using night-time archived data and Equation (3-7).


With all the parameters specified, the ST-Estimator can provide speed and truck volume estimates in real time.

Figure 3-7: Real-Time Data Window of the ST-Estimator System

(a) Basic Program Settings (b) Advanced Program Settings

Figure 3-8: ST-Estimator’s Program Settings Interface


4.0 SINGLE-LOOP ALGORITHM TESTS

4.1 TEST SITES

The single-loop speed and truck volume estimation algorithm implemented in ST-

Estimator was tested by using data from three locations along I-5 in Seattle: station ES-

167D at NE 145th Street (milepost 174.60), station ES-172R at the North Metro Base

(milepost 175.50), and station ES-209D at 156th Street SW (milepost 184.49). All three

stations are dual-loop stations, chosen so that the performance of the ST-Estimator could

be compared to actual ground truth data recorded by the dual-loop detectors. Care was

taken to select dual-loop stations that were functioning properly.

Twenty-four hour data (0:00-24:00) were collected at each station. These data are

available for download from the Transportation Data Acquisition and Distribution (TDAD) website at the University of Washington (http://www.its.washington.edu/tdad/tdad_top.html). Descriptive statistics of the interval volumes for

each location are tabulated in Table 4-1.

Table 4-1: Site Information and Interval Vehicle Volume Statistics

Station                ES-167D      ES-172R      ES-209D
Loop Code              _MS___2      MMS___2      _MN___2
Collection Date        17-May-05    25-May-05    18-May-05
Minimum Volume         0            0            0
Maximum Volume         18           18           18
Average Volume         6.55         7.11         6.96
Std. Deviation         3.77         3.94         3.99
M-Loop Volume          28295        30719        30046
S-Loop Volume          28273        30646        29577
T-Loop Volume          26800        30646        29149
M-Loop – S-Loop        22           73           469
M-Loop – T-Loop        1495         73           897
Dropped T-Loop Vol.    1473         0            428


4.2 TEST RESULTS AND DISCUSSION

The algorithm that directly applies single-loop measurements to Equation (2-1)

for speed estimates by using a constant g is identified as the “traditional algorithm” in

this report. The traditional algorithm and the proposed region growing algorithm were

tested against ground truth data for periods of 9, 12, and 15 20-second intervals (3, 4, and 5 minutes, respectively). Graphs of estimated versus actual speeds, with R2 values, are provided in Figure 4-1, and a summary of the results is provided in Table 4-2.

A perfect estimation would result in all data points forming a line of slope 1.0 passing through the origin. Therefore, data points falling below the ideal line represent underestimated speeds, and those above the line represent overestimated speeds. The

proposed algorithm, based on the revised region growing concept, clearly provided

superior speed estimates.

The results of LV estimation are provided in Table 4-3. Comparisons are given in

absolute differences for the entire day. Computation of more complex error

measurements did not seem appropriate because LV volumes in general constituted less

than 10 percent of the traffic at each location and, therefore, could be considered a

somewhat “rare” event. Daily LV volume estimates were on average within 4.0 percent

of the dual-loop estimated LV volumes.


Figure 4-1: Estimated versus Actual Speeds for Region Growing and WSDOT Algorithms with Period Lengths of 3 and 5 Minutes on Lane 2 of Southbound I-5 at NE 145th St, May 17, 2005


Table 4-2: Summary of Speed Estimation Results

Station &    Loop Coeff.   Period        Traditional Algorithm           Wang-Nihan Algorithm            Region Growing Algorithm
Loop Code    Beta          Length (min)  SSE     SSE/Per.  Avg. % Err.   SSE     SSE/Per.  Avg. % Err.   SSE     SSE/Per.  Avg. % Err.

ES-167D      1.01          3             38504   80        11.6%         12593   26        6.3%          11432   24        6.2%
_MS___2                    4             24339   68        10.7%         6369    18        5.5%          7388    21        5.9%
                           5             16471   57        10.1%         4124    14        5.1%          4741    16        5.3%

ES-172R      0.92          3             34421   149       11.1%         16698   36        7.2%          10109   19        6.2%
MMS___2                    4             21293   59        10.1%         7208    20        6.1%          6275    17        5.7%
                           5             14681   144       9.4%          4209    15        5.4%          3735    11        5.0%

ES-209D      1.01          3             34650   255       11.3%         15265   34        6.8%          13421   25        6.2%
_MN___2                    4             21713   60        10.4%         9800    27        6.5%          8808    24        5.8%
                           5             15815   257       9.8%          6550    25        6.0%          5309    17        5.3%

Table 4-3: Summary of LV Volume Estimation

Station &    Period        Dual-Loop    Estimated
Loop Code    Length (min)  LV Volume    LV Volume    Error    % Error

ES-167D      3             2369         2315         -54      2.28%
_MS___2      4             2369         2317         -52      2.20%
             5             2369         2285         -84      3.55%

ES-172R      3             2566         2678         112      4.36%
MMS___2      4             2566         2683         117      4.56%
             5             2566         2722         156      6.08%

ES-209D      3             2630         2823         193      7.34%
_MN___2      4             2630         2760         130      4.94%
             5             2630         2602         -28      1.06%

Because it is true that intervals containing LVs would naturally have a higher

occupancy variance, one might question, on the basis of heteroskedasticity concerns,

whether it is valid to use R2 values as a measure of goodness-of-fit for speed estimates.

However, the region growing algorithm does not require the homoskedasticity

assumption that traditional linear regression requires. In fact, because intervals suspected

to contain LVs are explicitly treated differently, the region growing algorithm does

account for the heteroskedasticity inherent in the data. Thus, computing R2 values on the


basis of speed estimates from the region growing model does not violate any classical

econometric assumptions.

4.3 SINGLE-LOOP ALGORITHM TEST SUMMARY

The algorithm includes a revised region growing method for speed estimation and

a method for LV volume estimation based on the Nearest Neighbor Decision rule

approach. The revised region growing method identifies data intervals that do not

contain any LVs for a particular time period. Volume and occupancy data from these

intervals, together with a g factor derived from the mean of the SV population reported

by Wang and Nihan (2003), are used to estimate average vehicle speed for the period.

This speed is then used in a distance-weighted Nearest Neighbor routine to estimate the

number of LVs present in each interval. The algorithm outperformed the traditional

algorithm even when the value for the parameter g used in the traditional algorithm was

calibrated with night-time data.

Further improvements to the proposed algorithm can be addressed in future

research studies. For example, the algorithm’s performance should be tested at the onset

and dissipation of congested periods, when the constant average speed assumption is

clearly violated. Also, the underlying vehicle distributions used in the proposed

algorithm should be tested for spatial and temporal transferability to ensure validity in

applying the algorithm to other testing locations.


PART III VIDEO IMAGE PROCESSING FOR VEHICLE DETECTION AND CLASSIFICATION

5.0 VIDEO RESEARCH APPROACH

To better utilize video equipment available to the majority of traffic systems

management centers, we propose a simple yet effective vehicle detection and

classification algorithm that uses un-calibrated surveillance video cameras to collect SV

and LV volumes for individual roadway lanes. The research approach described here was

split into four primary categories of investigation: background extraction, vehicle

detection, shadow removal, and length-based classification. Details of each are discussed

in the following sections.

5.1 BACKGROUND EXTRACTION

Typically, a computer vision-based detection system requires a background image

that represents the base state of the area under observation. In the case of traffic

detection, it is rarely possible to obtain an image of the observation area that does not

contain any vehicles or other foreground objects. Therefore, it is necessary to extract the

background image from the video stream itself. This is accomplished in an iterative

fashion by using the pixels that make up an image. A grayscale image has only one value for each pixel, ranging from 0 to 255. A color image uses three color channels to

represent a pixel’s color. These three channels in the RGB color space are the Red

channel (R), the Green channel (G), and the Blue channel (B). Each channel has a value

from 0 through 255 that represents the amount of that color. When the median

background extraction algorithm is applied to a color image, the median value of each

color channel needs to be found for each color pixel. The intensity (or luminance) of a


color pixel is the value of grayscale converted from the R, G, and B color values by using

Equation (5-1) (Shapiro and Stockman, 2001).

Grayscale = R * 0.30 + G * 0.59 + B * 0.11 (5-1)

In this current study, the background image was obtained by constructing an

image of the median value of each pixel from a collection of images:

    bgd_{i,j} = \mathrm{median}\left( img_{i,j}[n] \right)        (5-2)

where: bgd_{i,j} is the background image pixel value; img_{i,j} is an array of image pixel values; and n is the number of pixel values in the array.

In this study, we used a frame rate of 12 frames per second (fps) for video image

processing. To extract the background image, 15 images spaced 20 frames apart were

employed. By using the median value, it was assumed that the background was

predominant in the image sequence. Figure 5-1 shows a snapshot of a video scene and the

extracted background image for that scene. For data collections in locations with higher

volumes (which would tend to obscure the background to a greater degree), a background

extraction based on the mode of each pixel would be preferable (Zheng et al., 2006). In

high volume and congested situations where portions of the background are never visible,

more advanced background estimations might be required.
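A per-channel median extractor takes only a few lines of code. The C# sketch below computes Equation (5-2) for one channel of a frame collection; for color video it would be run once per RGB channel. The array layout and names are assumptions of ours, not the VVDC source.

    using System;
    using System.Collections.Generic;

    // Sketch of median background extraction (Equation 5-2) for one channel.
    static class BackgroundExtractor
    {
        // frames: e.g., 15 images sampled 20 frames apart, as described above.
        public static byte[,] Median(IList<byte[,]> frames)
        {
            int h = frames[0].GetLength(0), w = frames[0].GetLength(1);
            var bgd = new byte[h, w];
            var samples = new byte[frames.Count];
            for (int i = 0; i < h; i++)
                for (int j = 0; j < w; j++)
                {
                    for (int n = 0; n < frames.Count; n++)
                        samples[n] = frames[n][i, j];
                    Array.Sort(samples);
                    bgd[i, j] = samples[samples.Length / 2];  // median pixel value
                }
            return bgd;
        }
    }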


(a) A Snapshot of a Video Scene (b) Extracted Background

Figure 5-1: An Example Video Scene and Its Background

5.2 VEHICLE DETECTION

One potential disadvantage of using the background subtraction technique for

detecting vehicles is that because the background is not updated frequently, it does not

account for rapid lighting changes in the scene (Cucchiara et al., 2003). Such effects are

often caused by the entrance of a highly reflective vehicle, such as a large white truck,

into the scene. Before vehicles can be detected, these environmental illumination effects

must be accounted for. In this current study, correction for environmental illumination

effects was accomplished by using an automatic gain control (AGC). The AGC is a

rectangular area that was placed in a part of the scene where the background was always

visible (i.e., no vehicles were passing over the area). Thus, any changes in pixel

intensities could be assumed to be due to environmental effects, since no physical objects

had traversed the area. The average intensity change over this area from the background

image could be determined and applied to the entire image to improve accuracy and

avoid false vehicle detections:


    \Delta int = \frac{\sum_{A_{agc}} \left( intbg_{i,j} - intim_{i,j} \right)}{A_{agc}}        (5-3)

where: Δint is the average intensity difference over the AGC area; A_agc is the area of the AGC in number of pixels; intbg_{i,j} represents pixel intensity in the background image on the interval [0, 1]; and intim_{i,j} represents pixel intensity in the foreground (current) image on the interval [0, 1].
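In code, the correction amounts to averaging the background-minus-foreground difference over the AGC rectangle and applying the result to the whole frame, as in the C# sketch below; the rectangle coordinates and array layout are illustrative assumptions.

    // Sketch of the AGC correction of Equation (5-3). Intensities are on [0, 1].
    static class Agc
    {
        public static double IntensityShift(double[,] intbg, double[,] intim,
                                            int x0, int y0, int width, int height)
        {
            double sum = 0.0;
            for (int i = y0; i < y0 + height; i++)        // rows of the AGC area
                for (int j = x0; j < x0 + width; j++)     // columns of the AGC area
                    sum += intbg[i, j] - intim[i, j];
            return sum / (width * height);                // average shift, applied frame-wide
        }
    }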

Vehicle detection was then performed with virtual detectors drawn by the user

over the program scene. Each virtual detector consisted of a registration line, a detection

line, and a longitudinal line, as illustrated in Figure 5-2.

Figure 5-2: The Components of the Virtual Detector

Our vehicle detection algorithm first inspected for vehicles on the registration line:

line:

    d_{i,j} = \left| intbg_{i,j} - intim_{i,j} - \Delta int \right|, \quad \forall\, p_{i,j} \in line        (5-4)

where: p_{i,j} represents a pixel location; line represents the set of all pixels on the registration line; and d_{i,j} is the differenced pixel intensity on the interval [0, 1].



We then defined a set C that contained all differenced absolute pixel intensities

greater than some threshold, t (in this study, a difference of 0.05 was used):

    C = \left\{ d_{i,j} : d_{i,j} > t \right\}        (5-5)

If more than 30 percent of the members of the set line were also contained in the set C, we considered the line to be occupied by a vehicle. To present this fact graphically to the

user, the color of the registration line was changed from green to magenta as a visual cue

after each detected vehicle.
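The occupancy test itself is compact. The C# sketch below combines Equations (5-4) and (5-5) with the thresholds used in this study (t = 0.05 and the 30 percent rule); pixel coordinates and array layout are illustrative.

    using System;
    using System.Collections.Generic;

    // Sketch of the registration-line occupancy test.
    static class LineDetector
    {
        const double T = 0.05;   // intensity difference threshold t

        public static bool Occupied(IList<(int i, int j)> linePixels,
                                    double[,] intbg, double[,] intim, double agcShift)
        {
            int members = 0;
            foreach (var (i, j) in linePixels)
            {
                double d = Math.Abs(intbg[i, j] - intim[i, j] - agcShift);  // (5-4)
                if (d > T) members++;                                       // in C  (5-5)
            }
            return members > 0.30 * linePixels.Count;   // line occupied by a vehicle
        }
    }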

Once detected, each vehicle was processed and classified in one of two ways:

entrance detection or exit detection. In entrance detection, the vehicle is already over the

detector when it crosses the registration line, and processing occurs when the registration

line is first occupied; that is, no vehicle was present over the line in the previous frame.

In exit detection, the vehicle is fully in the detector as it is leaving the registration line,

and the vehicle is processed upon exiting the registration line; that is, a vehicle was

present over the line in the previous frame.

When a vehicle was processed, the detection line was inspected for differences in

a manner similar to that of the registration line inspection. In this case, however, the set C of locations of absolute pixel intensity differences greater than t was used as seed points for obtaining the vehicle region, Reg, the set of all simply connected pixels satisfying the set membership rule for C without the requirement that they lie on the line. The bounding

box was then the rectangular coordinates that represented the minimum and maximum

coordinates of the pixels in region Reg. Computation of the bounding box localized the

area of interest and improved the computational efficiency of the algorithm. The area


within the bounding box was then passed on to the shadow removal routine and length

classifier.

5.3 SHADOW REMOVAL

The bounding box generated by the vehicle detection step often includes any

shadow associated with the vehicle of interest. In these cases, it is often necessary to

remove this shadow to obtain the true vehicle dimension and avoid introducing bias into

the length estimates. The subsections below present investigations of several shadow removal algorithms, followed by the algorithm ultimately employed in this study.

5.3.1 Experiments of Several Shadow Removal Algorithms

To design a high performance shadow removal algorithm, several methods were

investigated to perform shadow removal in real time. The first method involved a dual-

pass application of the Otsu automatic thresholding method (Otsu, 1979) to the intensities

of the detected foreground pixels. The first application of the Otsu method separated the

pixels into high and low intensity populations. The high intensity population was

considered to be the vehicle of interest. However, the low intensity population might

have consisted of both shadow pixels cast by the vehicle as well as self-shadow areas on

the vehicle that were hidden from direct illumination sources. To separate these two

areas, a second thresholding of the lower intensity population was performed. Those

pixels above the resultant threshold were considered to be self-shadow pixels, while those

pixels occupying the absolute lowest pixel range were considered to be the cast shadow.

One of the primary advantages of this method is that it is computationally

inexpensive. Figure 5-3 illustrates that although the method performed well for bright

vehicles with dark cast shadows, it did not perform as well when darker vehicles were


considered. Notice that in the latter case, self-shadow regions of the pickup truck were classified as shadow and subsequently replaced by pixels from the background.

Although this problem could be mitigated by allowing only points connected to the

exterior of the vehicle to be classified as shadow, the algorithm also did not perform well

when the cast shadow was not uniform and not as dark. This is illustrated in Figure 5-4,

which was taken when the altitude of the sun was low, resulting in shadows that may

have occupied higher intensity ranges than the vehicle itself.

Figure 5-3: Otsu Method for Shadow Removal on a Bright Vehicle and a Dark Vehicle

A second attempt at shadow removal applied a region growing method to identify the shadow area, exploiting the fact that shadow areas are more homogeneous in intensity than most vehicles. In this method, the direction of the shadow was determined to identify beginning

seed points for growth of the shadow. The seed characteristics were extracted from a

sample shadow region specified by the user. This method performed very well when the shadows were very dark or fell on asphalt pavement without noticeable cracks (Figure 5-

5). However, when the shadows occupied the relatively higher intensity ranges in low-


angle illumination on aged concrete pavement, the region growing approach did not

perform well (Figure 5-6).

(a) Original Image (b) Image after Shadow Removal

Figure 5-4: Otsu Method for Shadow Removal with a Non-Uniform Cast Shadow

(a) Original Image (b) Image after Shadow Removal

Figure 5-5: A Successful Example of the Region Growing Shadow Removal Method


(a) Original Image (b) Image after Shadow Removal

Figure 5-6: An Unsuccessful Example of the Region Growing Shadow Removal Method

5.3.2 Design of a Combined Shadow Removal Algorithm

The final approach for shadow removal used in this study was based on

identification of the shadow area in the subtracted edge image of the foreground from the

background. The first step in this approach was to determine the relative position of the

shadow to the vehicle. A simple way to specify this is through the user’s interactive input: the calibration user interface of the VVDC system lets the user select the shadow position relative to a vehicle from six choices, namely (1) upper left, (2) left, (3) lower left, (4) upper right, (5) right, or (6) lower right.

Another more general method is to determine the shadow position based on the

position of the sun. This is possible because during daytime detection any cast shadows

can be assumed to be generated by the sun. The position of the sun can be calculated by

knowing the time of day and the approximate latitude and longitude of the location under

study. The method used by Gronbeck (2004) was applied in this study. Once the sun’s


location was known, the pan angle of the camera view with respect to due north was all

that was necessary to calculate the direction of the shadow in the image:

    \alpha = \theta_c - \theta_{azim} + \frac{\pi}{2}        (5-6)

where: α is the image shadow angle in radians counterclockwise from the positive x-axis; θ_c is the camera pan angle in radians counterclockwise from due north; and θ_azim is the sun azimuth angle in radians counterclockwise from due south.
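Equation (5-6) translates directly into code, as the short C# sketch below shows. Computing the azimuth itself from the time of day and the site's latitude and longitude (Gronbeck, 2004) is outside the sketch, and the names are ours.

    using System;

    // Sketch of Equation (5-6): shadow direction in the image plane.
    static class ShadowDirection
    {
        // thetaC: camera pan angle, radians CCW from due north.
        // thetaAzim: sun azimuth angle, radians CCW from due south.
        // Returns the shadow angle, radians CCW from the positive x-axis.
        public static double ImageShadowAngle(double thetaC, double thetaAzim)
            => thetaC - thetaAzim + Math.PI / 2.0;
    }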

The next step was to produce an edge image of the vehicle by using the method

developed by Canny (Canny, 1986):

    edge_{p \times q} = \mathrm{Canny}\left( img_{p \times q} \right) - \mathrm{Canny}\left( bgd_{p \times q} \right)        (5-7)

where: Canny(I) is the Canny edge image of image I; img_{p×q} is the foreground image framed by the bounding box; bgd_{p×q} is the background image framed by the bounding box; and edge_{p×q} is the difference edge image of the foreground and background.

The background edge image was subtracted from the foreground image to

eliminate edges present within the bounding box that were persistent in the background.

The edge image was then dilated once to close any gaps in the edge lines. Dilation,

erosion, closing, and opening are mathematical morphology operations used in image

processing. More details about these operations are available in Shapiro and Stockman

(2001). Figure 5-7 illustrates the resulting dilated edge image, along with images from

intermediate steps.
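Of these morphological operations, the single dilation pass used here is simple enough to sketch directly. The C# fragment below dilates a binary edge image with a 3×3 structuring element; the boolean-array representation is an assumption of ours.

    // Sketch of one binary dilation pass with a 3x3 structuring element,
    // the operation used above to close gaps in the subtracted edge image.
    static class Morphology
    {
        public static bool[,] Dilate(bool[,] edges)
        {
            int h = edges.GetLength(0), w = edges.GetLength(1);
            var dilated = new bool[h, w];
            for (int i = 0; i < h; i++)
                for (int j = 0; j < w; j++)
                {
                    if (!edges[i, j]) continue;
                    for (int di = -1; di <= 1; di++)       // mark the 3x3
                        for (int dj = -1; dj <= 1; dj++)   // neighborhood
                        {
                            int y = i + di, x = j + dj;
                            if (y >= 0 && y < h && x >= 0 && x < w)
                                dilated[y, x] = true;
                        }
                }
            return dilated;
        }
    }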


(a) A Foreground Image (b) The Corresponding Background Image

(c) The Edge Image of the Foreground (d) The Edge Image of the Background

(e) The Subtracted Edge Image (f) The Dilated Edge Image

Figure 5-7: Sample of Edge Imaging (Assuming the Bounding Box Includes the Entire Image)


The shadow location in the image was found in the following manner. First, a

closing morphological operation was performed on the binary image to fill in holes in the

vehicle region. The centroid of the binary mass was then found:

    centroid = \left( \frac{\sum_{A} p_x}{A}, \; \frac{\sum_{A} p_y}{A} \right)        (5-8)

where: p_x is the x-coordinate of a pixel in Reg; p_y is the y-coordinate of a pixel in Reg; and A is the pixel area of the region Reg.

A line was then created from the centroid in the direction of the shadow angle.

The point of intersection between this line and the outer edge of the region Reg was the

point where the shadow was assumed to exist. The corresponding point in the edge image was used as a seed, and the shadow region, S, was formed from the collection of all eight-connected points (in a 3×3 mask) not on an edge line. The binary region, V, representing the true vehicle could now be formed by subtracting the shadow region, S, from the region Reg:

    V = Reg - S        (5-9)

A binary morphological opening operation was performed on the region V to

remove any lingering loop around the shadow region. Figure 5-8 shows an example of a

detected truck before and after shadow removal.
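A skeletal C# version of these last steps appears below: the centroid of Equation (5-8) and the set subtraction of Equation (5-9). The flood fill that actually grows the shadow region S from its seed is elided, and the point-set representation is an assumption of ours.

    using System;
    using System.Collections.Generic;
    using System.Linq;

    // Sketch of the final shadow-removal bookkeeping.
    static class ShadowRemoval
    {
        // Equation (5-8): centroid of the binary region Reg.
        public static (double x, double y) Centroid(HashSet<(int x, int y)> reg)
            => (reg.Average(p => (double)p.x), reg.Average(p => (double)p.y));

        // Equation (5-9): V = Reg - S, the region with the cast shadow removed.
        public static HashSet<(int, int)> RemoveShadow(
            HashSet<(int x, int y)> reg, HashSet<(int x, int y)> shadow)
        {
            var vehicle = new HashSet<(int, int)>(reg);
            vehicle.ExceptWith(shadow);
            return vehicle;
        }
    }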


(a) Before Shadow Removal (b) After Shadow Removal

Figure 5-8: An Example of a Detected Truck Before and After Shadow Removal

5.4 LENGTH-BASED CLASSIFICATION

As mentioned in Section 3.1, the frequency distribution of vehicle lengths clearly

indicated a bimodal distribution with two distinct peaks, one higher peak centered at

about 18 ft (5 m), representing the concentration of SV lengths, and the other centered at

about 74 ft (23 m) representing LV lengths. Note that the distributions were split at 40 ft

(12.2 m), which corresponds to the boundary between bins 2 and 3 of the WSDOT

vehicle classification system outlined in Table 1-1.

Trucks normally constitute less than 20 percent of traffic for major roadways in

Washington State (WSDOT, 2002), which indicates that most of the detected vehicles on

freeways are SVs. If a sufficient number of vehicle lengths are collected, the bimodal

distribution can be applied to separate vehicles into SV and LV groups on the basis of

relative length comparisons between vehicles. Relative comparisons to determine vehicle

classification have been proven effective by Wang and Nihan (2003) in developing more

accurate speed estimation for single-loop detectors. The method can be extended further

by using the apparent pixel-based length of vehicles rather than the physical length. This


was feasible in the current study because the only desire was to classify vehicles by

length, and it was not necessary to know the actual length of each vehicle, as long as it

was properly classified. As soon as a vehicle exits the registration line, the length

algorithm merely moves along the longitudinal line, counting the number of pixels as the

pixel-based length of the vehicle. This ensures that the lengths of all the vehicles in a

lane are measured at almost the same starting point, so that the measured vehicle lengths

are comparable. Note that these are relative lengths, and a particular length measurement

does not represent the actual length of the vehicle. In this manner, vehicles can be

separated by length and classified without requiring camera calibration. This increases

the flexibility and attractiveness of this mobile traffic detection system.

In the application, pixel-based vehicle lengths for each vehicle were obtained

once the shadow had been removed from the vehicle. This vehicle length was simply the

length along the longitudinal line that was occupied by the vehicle region V :

    len = \sqrt{\left(e_x - s_x\right)^2 + \left(e_y - s_y\right)^2}        (5-10)

where: s_x, s_y are the start coordinates of the line; e_x, e_y are the end coordinates of the line; and len is the pixel-based length of the vehicle.

The pixel-based length of each vehicle was then compared with a threshold value

to determine whether it belonged to the SV category or the LV category. Because a vehicle’s apparent size differs between cameras with different lens and mounting settings, the threshold value could not be a universal predetermined value.

The threshold value for each lane can be specified by users with the interactive

interface of the VVDC system. The length of the longitudinal line of each virtual loop

serves as the threshold. To specify the threshold accurately, a user can wait until a


representative vehicle shows up and then use the vehicle as a reference to draw the

longitudinal line. Vehicles longer than the longitudinal line are assigned to the LV

category. Specifying the length threshold this way provides users with the flexibility to

collect classified vehicle volumes of desired lengths.
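The classification step then reduces to a single comparison, as in the C# sketch below, which pairs the pixel length of Equation (5-10) with the user-drawn threshold line; the names are illustrative.

    using System;

    // Sketch of relative, pixel-based length classification (Section 5.4).
    static class LengthClassifier
    {
        // Equation (5-10): pixel-based length between two line endpoints.
        public static double PixelLength(double sx, double sy, double ex, double ey)
            => Math.Sqrt((ex - sx) * (ex - sx) + (ey - sy) * (ey - sy));

        // A vehicle longer than the user-drawn longitudinal line is an LV.
        public static bool IsLongVehicle(double vehiclePixelLength,
                                         double thresholdLineLength)
            => vehiclePixelLength > thresholdLineLength;
    }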


6.0 DEVELOPMENT OF THE VIDEO-BASED VEHICLE DETECTION AND CLASSIFICATION SYSTEM

This section describes the development of the VVDC system that implements the

algorithm presented in Chapter 5.

6.1 SYSTEM ARCHITECTURE

The VVDC system comprises a WinTV-USB device (details for this device are

available at http://www.hauppauge.com/html/usb_data.htm) and a personal computer. It

is designed for both online and offline operations on a regular personal computer running

Windows 2000 or Windows XP. The personal computer used for VVDC system

development was a Dell Latitude D600 laptop computer with a Pentium M 1.6 GHz central processing unit, 1 GB of memory, and a 32-MB ATI Radeon 9000 video card. It

ran the Windows XP Professional operating system.

The WinTV-USB device is used for digitizing live video signals. When the

VVDC system is executed offline, it reads digitized video images from a storage media,

and the WinTV-USB device is not necessary. For online operations, the VVDC system

reads real-time images from the WinTV-USB device, to which a live video signal source is connected. The live video source can be a video cassette player or a video

camera. The components of the VVDC system and possible video data sources are shown

in Figure 6-1.


[Figure 6-1 diagram: a video data source connects over a video data link to the WinTV-USB card, which feeds the computer running the VVDC software.]

Figure 6-1: Components of the VVDC System

The software component of the VVDC system was written in the Microsoft

Visual C# programming language. It has six modules: a live video capture module, a

user input module, a background extraction module, a vehicle detection module, a

shadow removal module, and a length-based classification module. The relationships

among these modules are illustrated in Figure 6-2. Details of each module are described

in the following sections.


[Figure 6-2 diagram: frames from the Live Video Capture Module (live video via the USB port, or images from media) feed the Background Extraction Module, which queues N images and takes the per-pixel median of color values, and the Vehicle Detection Module, which checks whether the detection line is occupied and whether the vehicle has registered before counting it. Counted vehicles pass to the Shadow Removal Module (bounding box, centroid computation, edge detection, shadow removal) and then to the Length-Based Classification Module, which compares the pixel-based length against the LV threshold to count long vehicles. The User Input Module supplies the virtual detectors, shadow sample, and LV threshold.]

Figure 6-2: Flow Chart of the VVDC System

6.2 LIVE VIDEO CAPTURE MODULE

Because the image stream from each WSDOT camera location is multiplexed and

transmitted via a fiber cable to the Traffic Systems Management Center (TSMC) for real-

time traffic observation, control, guidance, and management, only an extension from the

TSMC to the Smart Transportation Applications and Research Laboratory (STAR Lab) at

the University of Washington was necessary. This link was established in September

2004 and enables access to any of the WSDOT surveillance cameras in the Puget Sound

region. Only one video sequence can be transmitted over the feed at a time. A switch

program developed by Dr. Dan Dailey in the Department of Electrical Engineering at the

University of Washington is employed to switch the camera for display over the feed.

Although this live video could be digitized into frames as the recorded video was, a goal


of this project was to use raw live video input directly in the program. This was

accomplished via the live capture module.

As a plug-and-play system, the VVDC system has a live video capture module to

digitize video images in real time for online applications. Live video signals can come

from a surveillance video camera or a video cassette player. A WinTV-USB card

produced by Hauppauge Digital, Inc. was used to implement this module. The WinTV-

USB card is connected to a personal computer through a Universal Serial Bus (USB) port

at one end and to the live video source at the other end through an S-video or composite

adapter. This standard portable video device has a 125-channel, cable-ready TV tuner and

is widely used for Internet video conferencing, TV-viewing, and image capture

applications on computers. It is available at local electronics stores for approximately

$100. The built-in features of the WinTV-USB device include video capture at variable

rates and in different formats, adjustable image size, and changeable color configurations.

The live video capture module can produce digitized image streams in either the

Joint Photographic Experts Group (JPEG) or the bit-mapped graphic (BMP) format and

supports a capture rate of up to 30 frames per second (fps). Captured video images can be

provided to other modules of the VVDC system for online analysis or can be stored in a

folder for offline processing. For most roadway applications, 10 fps is sufficient to track

vehicle movements. The VVDC system processes video frames at 12 fps for vehicle

detection and classification, although our tests indicated that the system is able to

handle 20 fps in real time. The image size used for the VVDC system is 320 by 240

pixels.


Although the WinTV USB device was chosen for capturing live video images, the

VVDC system does not contain any code specific to this device. This means that the

VVDC system can use any video capture device supported by the Windows 2000 or

Windows XP operating systems. The live video capture module uses Microsoft DirectX

technology (Microsoft Inc., 2002) to search video sources connected to the computer.

Once a video source has been identified, the VVDC system polls the digitized images at

12 fps by default. When more than one video source is available, the image stream from

the first USB port is used as the default.
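The device enumeration itself is handled by DirectX and is omitted here, but the timer-driven polling structure can be sketched in C# as follows; the ICaptureDevice interface and stub device are assumptions standing in for the real WinTV-USB source:

    using System;
    using System.Threading;

    // Hypothetical abstraction over a DirectX-enumerated capture source.
    interface ICaptureDevice { byte[] GrabFrame(); }

    class StubDevice : ICaptureDevice
    {
        public byte[] GrabFrame() => new byte[320 * 240 * 3];  // one 320x240 RGB frame
    }

    static class FramePoller
    {
        static void Main()
        {
            ICaptureDevice device = new StubDevice();
            int periodMs = (int)(1000.0 / 12);                 // default rate: 12 fps
            using var timer = new Timer(_ =>
            {
                byte[] frame = device.GrabFrame();
                // frame would be handed to background extraction / vehicle detection
            }, null, 0, periodMs);
            Thread.Sleep(1000);                                // run briefly for the demo
        }
    }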

6.3 USER INPUT MODULE

Figure 6-3 illustrates the main user interface of the VVDC System, which consists

of the current frame for analysis; the controls for folder selection, starting, stopping, and

resetting the program; and an output window of detector data collected. Users can

choose to open a directory where static images are stored for analysis or to proceed with

live video processing, which assumes an available digitized video source on the

computer. The field of view needs to be configured before data collection can proceed.

The VVDC system provides interactive input functions for configuration. A snapshot of

the interactive configuration interface is shown in Figure 6-4.


Figure 6-3: The Main User Interface of the VVDC System


Figure 6-4: The Interactive Configuration Interface

The configuration process involves drawing the desired detectors, specifying the

vehicle length threshold, selecting the AGC area (light filter box) and sample shadow

zone, and specifying the relative position of shadow to a vehicle. Normally, one detector

is needed for each lane. Detectors should be drawn at locations where vehicles are clearly

visible with minimal occlusion problems. The longitudinal line for a detector serves as

the threshold for separating SV and LV categories. The AGC box should be drawn at a

location free of shadows and moving objects. A user can select a sample shadow zone

that the VVDC system can use to extract statistical features of the current shadows and to


specify a vehicle’s relative location to its shadow to facilitate the shadow removal

process. As an alternative, a user can input the rough latitude and longitude of the data

collection location and the pan angle of the camera view with respect to due north for the

system to use in calculating the direction of the shadow. By default, the VVDC system

uses the latitude and longitude of Seattle as the location for data collection.

Configurations may also be saved and loaded to prevent having to reconfigure the same

site twice. Once the site has been configured, the program is ready to perform data

collection tasks.
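A minimal sketch of how such a site configuration might be represented and saved is shown below; the record and field names are hypothetical, the report does not specify the actual file format used by the VVDC system, and System.Text.Json (available in .NET 5 and later) is assumed only for illustration:

    using System;
    using System.IO;
    using System.Text.Json;

    // Hypothetical records mirroring the configuration items described above.
    record LineSeg(int X1, int Y1, int X2, int Y2);
    record Box(int X, int Y, int Width, int Height);
    record Detector(LineSeg Registration, LineSeg Detection, LineSeg Longitudinal);
    record SiteConfig(Detector[] Detectors, Box AgcBox, Box ShadowSample,
                      double Latitude, double Longitude, double PanAngleDeg);

    static class ConfigStore
    {
        static void Main()
        {
            // One detector for a single lane; Seattle is the default location.
            var cfg = new SiteConfig(
                new[] { new Detector(new LineSeg(100, 150, 160, 150),
                                     new LineSeg(100, 170, 160, 170),
                                     new LineSeg(130, 150, 130, 190)) },
                new Box(10, 10, 20, 20), new Box(60, 200, 30, 15),
                47.6, -122.3, 0.0);

            File.WriteAllText("site.json", JsonSerializer.Serialize(cfg));
            SiteConfig loaded = JsonSerializer.Deserialize<SiteConfig>(
                File.ReadAllText("site.json"))!;
            Console.WriteLine(loaded.Detectors.Length + " detector(s) restored");
        }
    }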

6.4 BACKGROUND EXTRACTION MODULE

To run properly, the VVDC system requires a good quality background, at least at

the virtual detector locations. The background extraction module used in the VVDC

system employs the median background-extraction method described in Section 5.1.

Depending on the roadway condition, it may take up to a minute to run the algorithm. To

display the background extraction process visually to users, the module employs a progress bar. Figure 6-5 shows a snapshot taken in the middle of a background extraction process.

The green portion of the lower right bar indicates the percentage of the background

extraction task completed.


Figure 6-5: Background Extraction Process

After the background has been extracted, it can be viewed by using the View

menu. Figure 6-6 shows the background extraction result from the process shown in

Figure 6-5. If the extracted background image is not acceptable, the extraction process

can be repeated until an acceptable background image is composed. A background image

can be saved for future use.
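The per-pixel median operation at the heart of this module can be sketched in C# as follows (a single 8-bit channel is shown for brevity; the actual module applies the same idea to each color channel):

    using System;
    using System.Collections.Generic;

    static class MedianBackground
    {
        // Per-pixel median over a queue of frames, for one 8-bit channel.
        public static byte[] Extract(IReadOnlyList<byte[]> frames)
        {
            int n = frames.Count, pixels = frames[0].Length;
            var background = new byte[pixels];
            var samples = new byte[n];
            for (int p = 0; p < pixels; p++)
            {
                for (int f = 0; f < n; f++) samples[f] = frames[f][p];
                Array.Sort(samples);
                background[p] = samples[n / 2];   // median of the queued values
            }
            return background;
        }

        static void Main()
        {
            var rng = new Random(1);
            var frames = new List<byte[]>();
            for (int f = 0; f < 9; f++)           // nine queued frames
            {
                var img = new byte[320 * 240];
                rng.NextBytes(img);
                frames.Add(img);
            }
            Console.WriteLine($"background[0] = {Extract(frames)[0]}");
        }
    }

Because a moving vehicle occupies any given pixel in only a minority of the queued frames, the median tends to recover the pavement color even when traffic is present.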


Figure 6-6: Extracted Background Image

6.5 VEHICLE DETECTION MODULE

The processing loop of the VVDC system is based upon a timer that operates at a

frame processing rate of 12 fps. During each cycle, a new image is obtained from a

specified source. If the background image has not yet been composed, the image is passed to the

background extraction module for background extraction. Otherwise, each image is

processed according to the algorithm presented in Section 5.2 for vehicle detection.

The VVDC system detects vehicles at locations of virtual detectors. A virtual

detector comprises a registration line (green), a detection line (blue), and a longitudinal

line (green-yellow). The distance between the registration line and the detection line

must be shorter than the length of a short car and longer than the distance of vehicle

movements between two consecutive frames.

The vehicle detection module scans the registration line and the detection line for

every frame and compares the pixel values on the two lines to the corresponding

background pixels. If the number of non-background pixels on the line exceeds a


threshold, then the line is marked as “on,” which indicates that a vehicle is detected.

When a vehicle is detected at the registration line, the color of the registration line

changes to magenta so that users are aware of the fact that a vehicle has been detected.

Similarly, when the detection line detects a vehicle, its color changes to yellow. A

vehicle will not be counted until it exits the registration line and occupies the detection

line. Such detection logic is designed to avoid over-counting vehicles because of minor camera vibrations and other video noise. Once the detection line has been occupied, the system

keeps monitoring the registration line until the vehicle passes it. Then the VVDC system

calls the shadow removal module to eliminate the cast shadow of the vehicle and the

vehicle classification module to measure its pixel-represented length. Figure 6-7 shows a

snapshot of the system when a vehicle is detected and classified. The program provides

visual cues to users when vehicles are detected and processed. The program also logs the

information of each detected vehicle in a text file.
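The counting logic described above amounts to a small per-detector state machine; a minimal C# sketch (hypothetical names, not the project source) is shown below:

    using System;

    class VirtualDetector
    {
        bool registered;                 // a vehicle is currently over the detector
        public int Count { get; private set; }

        // Called once per frame. A line is "on" when the number of
        // non-background pixels along it exceeds a threshold.
        public void Update(bool registrationOn, bool detectionOn)
        {
            if (!registered && registrationOn)
                registered = true;              // vehicle has entered
            else if (registered && detectionOn && !registrationOn)
            {
                Count++;            // counted only after it exits the registration
                registered = false; // line while still occupying the detection line
            }
        }

        static void Main()
        {
            var d = new VirtualDetector();
            // frame-by-frame line states for one passing vehicle
            d.Update(true, false); d.Update(true, true); d.Update(false, true);
            Console.WriteLine(d.Count);         // prints 1
        }
    }

Requiring the vehicle to clear the registration line before being counted is what filters out the transient false triggers caused by camera vibration.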


Figure 6-7: A Snapshot of the VVDC System When a Vehicle is Detected and Classified

6.6 SHADOW REMOVAL MODULE

Vehicle detection based on background subtraction can identify moving blobs on

a video scene. Figure 6-8 shows an example image frame and its moving blobs detected

by background subtraction. We can see that a vehicle’s moving blob contains both the

vehicle area and its cast shadow area. Without removing the shadow area, a vehicle’s

pixel-represented length may be exaggerated. More importantly, a lengthened shadow

extended to adjacent lanes may cause false alarms and result in over counting problems.


Therefore, the shadow removal module is an important component in the VVDC system

for accurate vehicle detection and classification.

(a) A Video Scene (b) Moving Blobs

Figure 6-8: Detection of Moving Blobs through Background Subtraction

The shadow removal module implements the algorithm described in Section 5.3.

Although the authors employed a relatively simple algorithm for shadow identification

and elimination, this process is still computationally expensive because of the edge

detection and morphological operations involved. In the current implementation, the

image region where edge detection and morphological operations are applied is limited to

the area in each bounding box. This cuts down the computational time significantly and

enables online applications of the VVDC system. Figure 6-9 illustrates the shadow

removal process for the white van shown in Figure 6-8. If the vehicle region is smaller

than a given threshold value after shadow removal, then the moving object will not be

counted as a vehicle. After shadow removal, the vehicle region is passed to the vehicle

classification module for length measurement and classification.


Figure 6-9: A Step by Step Illustration of the Shadow Removal Process (from left to right): (a) Original Image; (b) Bounding Box Area (Shown in Blue); (c) Detected Edges; (d) Shadow Identification; and (e) Shadow Removal.

6.7 VEHICLE CLASSIFICATION MODULE

The VVDC system classifies a vehicle into the SV or LV category on the basis of

its pixel-represented length. After shadow removal, a moving object contains only its

vehicle region. To calculate a vehicle’s length in number of pixels, the intersecting points

of the longitudinal line with the front and rear edges of the vehicle are needed. To find

the intersecting points, a 5×3 mask is used to search along the longitudinal line. If nine of

the fifteen pixels in the mask are non-background pixels, then the center point of the

mask is considered to be on the vehicle body. The search starts from the crossing point of

the longitudinal line and the detection line and ends when both the front and rear edges of

the vehicle are found. Then a red line indicating the detected vehicle length is drawn

within the bounding box to visually show the calculated vehicle length. If a vehicle’s

pixel-based length is longer than the LV threshold, the vehicle is assigned to the LV

category. Otherwise, it is considered to be an SV. A snapshot of the VVDC system

showing a calculated vehicle length is in Figure 6-7.
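A minimal C# sketch of the 5×3 mask test is shown below; the geometry is simplified to a vertical longitudinal line, and all names are hypothetical:

    using System;

    static class MaskSearch
    {
        // True when at least 9 of the 15 pixels in a 5-wide, 3-tall mask centered
        // at (cx, cy) are non-background, i.e., the center lies on the vehicle body.
        public static bool OnVehicleBody(bool[,] nonBackground, int cx, int cy)
        {
            int w = nonBackground.GetLength(0), h = nonBackground.GetLength(1);
            int hits = 0;
            for (int dx = -2; dx <= 2; dx++)
                for (int dy = -1; dy <= 1; dy++)
                {
                    int x = cx + dx, y = cy + dy;
                    if (x >= 0 && x < w && y >= 0 && y < h && nonBackground[x, y])
                        hits++;
                }
            return hits >= 9;
        }

        static void Main()
        {
            var mask = new bool[320, 240];
            for (int x = 118; x <= 122; x++)
                for (int y = 99; y <= 101; y++) mask[x, y] = true;  // filled 5x3 patch
            Console.WriteLine(OnVehicleBody(mask, 120, 100));       // prints True
        }
    }

The 9-of-15 majority vote makes the edge search tolerant of isolated noisy pixels that a single-pixel test would misread as a vehicle boundary.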


7.0 VVDC SYSTEM TESTS AND DISCUSSION

7.1 TEST CONDITIONS AND DATA

Because of the time constraints on this research project, the video image

processing algorithm developed in this study has two limitations. First, it was designed

for daytime detections only; the algorithm will not work under nighttime conditions.

Second, the algorithm assumes that the space-headway is sufficient to prevent any

vehicle pair from longitudinal occlusion. This means that the algorithm will not produce

good results under congested conditions or when the camera angle is so flat that it

produces significant longitudinal occlusion problems. Consequently, the system tests

described below had to be performed under relatively restricted conditions.

The system tests were divided into two parts: two offline tests with archived video

images and one online test with live video data. The two data sets for the offline tests

were collected from different locations: one from southbound I-5 near the NE 145th

Street over-crossing, shown in Figure 7-1, and the other from northbound SR 99 near the

NE 41st Street over-crossing, shown in Figure 7-2. Both data sets were taken by a Canon

L2 8-mm video camcorder. The I-5 test videotape was recorded between 11:30 AM and

12:30 PM on June 11, 1999. The SR 99 test videotape was taken from 4:00 PM to 5:00

PM on April 22, 1999. Twelve-minute video clips were extracted from both video tapes

and digitized at a rate of 12 fps, resulting in 8,640 frames in each offline test data set.


Figure 7-1: Southbound I-5 Near the NE 145th Street Over-crossing

Figure 7-2: Northbound SR 99 Near the NE 41st Street Over-crossing

Online test data came from the live video feed link between the TSMC at the

WSDOT and the STAR Lab at the University of Washington, shown in Figure 7-3 and

introduced in Section 6.2. The camera selected for online testing was the camera

shooting southbound I-5 near the NE 92nd Street over-crossing. A snapshot of this

location is shown in Figure 7-4.


Figure 7-3: Live Video Display at the STAR Lab

Figure 7-4: Southbound I-5 Near the NE 92nd Street Over-crossing

7.2 OFFLINE TESTS

7.2.1 The I-5 Test Location

Given the camera location and the traffic volume at this site, vehicle occlusion

was rare. Although the weather was sunny, the time of day during which the video

stream was taken resulted in shadows that tended not to stray into other lanes. Thus, this

image set provided an ideal test condition.


Table 7-1 shows the results of the VVDC system evaluation at this site, including

manually observed results (ground-truth data), system operation results, and comparisons

between the two. There was an overall detection error of only 1.06 percent, and trucks

were properly identified approximately 94 percent of the time. These test results

illustrated an encouraging performance by the VVDC system. Note, however, that even

though the system test results for truck classification on lane 1 (the right-most lane) were

the same as the observed results, this fact did not reflect perfect performance of the

system. Comparisons to ground-truth data indicated that the system produced two

mistakes: one truck was missed (a false dismissal), while another was double-counted (a

false alarm). A brief summary of the system errors is provided in the footnotes of Table

7-1. Further error investigations were conducted manually, and these findings are

described in Table 7-2.

Table 7-1: Offline Test Results from the I-5 Test Location

Location: Southbound I-5 near the 145th Street Over-crossing; Time period: 12 minutes

                                        Lane 4      Lane 3      Lane 2       Lane 1      Subtotal
    Ground-Truth   # of Trucks               5           4          37           12            58
                   Total # of Vehicles     149         409         335          244          1136
    System         # of Trucks               5           4          35           12            56
    Detected       Total # of Vehicles     154         412         335          245          1146
    Error          # of Trucks          0^a 0^b        0  0     2  5.41%  2^c 16.67%      4  6.89%
                   Total # of Vehicles  5  3.36%   3  0.73%        0  0    3^d 0.82%     11  1.06%

^a absolute error; ^b relative percentage error; ^c one was missed and one was over-counted; ^d two cars missed and one truck over-counted.


Table 7-2: Error Causes for the Offline Test at the I-5 Test Location

Lane 1
  Errors: (1) one truck missed; (2) one truck over-counted; (3) two vehicles missed.
  Explanations: (1) same cause as for Lane 2; (2) a truck that occupied both Lane 1 and Lane 2 was counted by both the Lane 1 and Lane 2 detectors (a snapshot of this truck is shown in Figure 7-5); (3) two lane-changing vehicles did not trigger either of the two virtual loops (see the black car in the lower right corner of Figure 7-6 for an example).

Lane 2
  Errors: two trucks missed.
  Explanation: the two false dismissals occurred because significant portions of the two trucks were too similar in color to the background for their lengths to be properly measured; Figure 7-7 shows one of the two trucks.

Lanes 3 and 4
  Errors: three vehicles over-counted on Lane 3 and five vehicles over-counted on Lane 4.
  Explanation: both lanes had false alarms, likely caused by the reflection of vehicle headlights from northbound I-5 traffic.

Figure 7-5: A Truck Triggered Both Lane 1 and Lane 2 Detectors


Figure 7-6: A Lane-Changing Vehicle Missed by the VVDC System

Figure 7-7: A Misclassified Truck with the Color of the Bed Similar to the Background Color

7.2.2 The SR 99 Test Location

The late afternoon sun at this location caused significant cast shadows. This test

data set was used to examine the adaptability and reliability of the VVDC system under a

challenging situation. This testing location included three lanes on northbound SR 99.

Vehicle shadows sometimes projected onto adjacent lanes, increasing the possibility of

spurious vehicle counts. Additionally, at this location traffic flow was interrupted


periodically by signal control at the upstream intersection. Although the camera at this

site was set reasonably high, periodic heavy traffic flow could also generate unexpected

longitudinal occlusions. These factors hindered VVDC system performance.

The traffic condition at this site was representative of a more complex scenario.

Selection of this site, therefore, enabled a performance evaluation of the VVDC system

under complicated conditions. Table 7-3 summarizes the test results in the same manner

as those presented in Table 7-1. The overall results were satisfactory, given that the test

conditions were challenging. During the testing period the overall count error was less

than 0.41 percent, and more than 93 percent of the trucks present were correctly

recognized. The error details are provided in Table 7-4.

Table 7-3: Offline Test Results from the SR 99 Test Location

Location: Northbound SR 99 near the NE 41st Street; Time period: 12 minutes

                                        Lane 3       Lane 2      Lane 1      Subtotal
    Ground-Truth   # of Trucks               8            7          15            30
                   Total # of Vehicles     270          244         192           706
    System         # of Trucks               7            6          15            28
    Detected       Total # of Vehicles   270^c          245         194           709
    Error          # of Trucks       1^a 12.5%^b   1  14.28%       0  0      2  6.67%
                   Total # of Vehicles  2  0.74%     1  0.41%   2  1.04%     5  0.41%

^a absolute error; ^b relative percentage error; ^c one vehicle missed and one over-counted.


Table 7-4: Error Causes for the Offline Test on SR 99

Lanes 1-3
  Errors: (1) four false alarms; (2) one false dismissal.
  Explanations: (1) the four false alarms were caused by reflections of sunlight on vehicle bodies; (2) a car that ran on the right shoulder did not trigger the detector on Lane 1 (see Figure 7-8 for a snapshot of the vehicle).

Lanes 2 and 3
  Errors: two trucks missed.
  Explanation: the false dismissals occurred because significant portions of the two trucks were too similar in color to the background for their lengths to be properly measured.

Figure 7-8: One Vehicle Driving on the Shoulder Did Not Trigger the Detector

7.3 ONLINE TEST

An online test was conducted with live video from the surveillance camera

installed at southbound I-5 near the NE 92nd Street over-crossing. Live video signals

were transmitted to the test computer via a fiber cable link between the WSDOT TSMC

and the STAR Lab. Selection of this site enabled us to examine the robustness and

reliability of the VVDC system when applied to live video images generated from a


typical surveillance camera. In comparison to an ideal test condition, the image quality

of this data set was seriously affected by low-intensity rain and wet pavement. Although

the image displacements resulting from the camera’s vibration could be ignored, the

moving objects were very small relative to the field of view. Additionally, reflections of

vehicle lights on wet pavement became another significant source of disturbance.

Therefore, this test was more challenging than the two offline tests described earlier.

The online test results for this location are summarized in Table 7-5. The overall

accuracy of the vehicle count was 97.73 percent, and the truck count accuracy was 91.53

percent. The performance of the VVDC system was slightly lower in this online test than

the offline tests. However, given that the test conditions were more complicated and

challenging, the accuracy levels achieved during this online test were deemed satisfactory.

In-depth reasons for the causes of each detection error are summarized in Table 7-6.

Table 7-5: Results of the Online Test at Southbound I-5 Near the NE 92nd Street Over-

crossing

Location: Southbound I-5 near the 92nd Street Over-crossing; Time period: 12 minutes

                                        Lane 4       Lane 3      Lane 2      Lane 1      Subtotal
    Ground-Truth   # of Trucks              13           36           5           5            59
                   Total # of Vehicles     388          378         380         170          1316
    System         # of Trucks              14           37           6           5            62
    Detected       Total # of Vehicles     397          387         389         173          1346
    Error          # of Trucks        1^a 7.69%^b  3^c 8.33%      1  20%       0  0      5  8.47%
                   Total # of Vehicles  9  2.31%     9  2.38%   9  2.36%   3  1.76%     30  2.27%

^a absolute error; ^b relative percentage error; ^c one truck missed and two trucks double-counted.


Table 7-6: Error Causes for the Online Test on Southbound I-5 Near the NE 92nd Street Over-crossing

Lanes 1-4
  Errors: 25 cars missed.
  Explanation: several lane-changing vehicles were missed at this location because they did not trigger any detectors (see Figure 7-9 for an example). Manual investigation also found that several vehicles were not detected when they passed over the virtual loops, probably because of random noise and the similarity of the vehicle colors to the background color. The camera at this location was mounted high to monitor all 11 lanes; consequently, vehicle regions were relatively small in comparison to those at the other test locations, which made detection more sensitive to random noise.

Lane 2
  Errors: one container truck missed.
  Explanation: see case 1 of Lane 3 for the cause of this error.

Lane 3
  Errors: (1) one truck missed; (2) two trucks over-counted.
  Explanations: (1) the misclassified container truck is shown in Figure 7-10; the two containers were separated by a distance large enough to prevent the length calculation algorithm from finding the front edge of the vehicle; (2) the VVDC system double-counted two trucks because of longitudinal occlusion, with two separate SVs detected as one LV (see Figure 7-11 for an example of this problem).

Lane 4
  Errors: one truck missed.
  Explanation: the error was caused by the similarity between the truck's appearance and the wet pavement; the system recognized only part of the truck and misclassified it as an SV.

Figure 7-9: A Lane-Changing Car Was Missed


Figure 7-10: A Gas Tanker Was Misclassified Because of the Large Distance between

the Two Containers

Figure 7-11: Truck Over-Counts Due to Longitudinal Occlusion

7.4 VVDC SYSTEM TEST SUMMARY

Evaluation results from the three test locations were encouraging. The accuracy of

vehicle counts was above 97 percent for all three tests. The effectiveness of the video


image processing method developed in this study was demonstrated. Both false alarms

and false dismissals were found in the tests. False alarms in vehicle detection were

mainly caused by wet pavement reflection. False dismissals were largely due to lane-

changing vehicles or to vehicles driving on the shoulder without triggering the virtual

sensors.

The accuracy of vehicle classification was lower than that of vehicle detection but

was still in the acceptable range. The total truck count error was lower than 9 percent for

all three tests. Two major causes of vehicle classification errors were longitudinal

occlusion and inaccurate estimates of pixel-based length. When vehicles’ moving blobs

merge, the VVDC system cannot separate the connected blobs and hence overestimates

vehicle length and over-counts trucks. For some combination trucks with two containers

connected by a hitch, the vehicle length calculation algorithm failed to find the front edge

of the vehicle and, therefore, misclassified it as two short vehicles. Trucks with a trailer

or bed whose color was similar to the image background experienced similar problems.

The prototype VVDC system developed in this study cannot handle vehicle

occlusions, severe camera vibrations, and head light reflection problems at the current

stage. Depending on the frequency of these problems, the actual application results may

vary from site to site.


PART IV PAIRED VIDEO AND SINGLE-LOOP SENSORS

Because WSDOT has both surveillance video cameras and single-loop detectors

deployed along its major freeway corridors in the greater Seattle area, it was of practical

interest to explore whether more accurate speed estimates could be achieved by

combining video and single-loop data. At WSDOT’s request, we also investigated the

idea of pairing video and single-loop sensors for better speed estimates in this study.

8.0 PAIRED VIDEO AND SINGLE-LOOP SENSOR ALGORITHM

8.1 INTRODUCTION

As described in Section 2.1, a major challenge of calculating speed with Athol’s

algorithm, shown in Equation (2-1) (for readers’ convenience, it is rewritten as Equation

(8-1)), is determination of the g value for each time interval.

ss(i) = \frac{V(i)}{T \cdot O(i) \cdot g}    (8-1)

where:
  i  = time interval index
  ss = space mean speed in mph for each interval
  V  = vehicles per interval
  O  = lane occupancy, the percentage of time the detector is occupied
  T  = the number of hours per interval
  g  = speed estimation parameter with units of 100 mile^{-1}.

When a noteworthy number of LVs is in the traffic stream, the MEVL may vary

significantly from interval to interval. Without individual vehicle length information, it is

very challenging to determine the g value for each time interval. However, if an

interval does not contain any LVs, we can use the mean of the observed SV length

distribution (Wang and Nihan, 2003) to approximate the average length of the vehicles

detected in the interval because SV lengths vary narrowly, as described in Section 3.1.


Consequently, assuming that the average speeds of SVs and LVs in the same traffic

stream are similar, a constant g value can be calculated by using Equation (3-6) (for

readers’ convenience, it is rewritten as Equation (8-2)).

g = \frac{52.80}{\mu_{SV} + l_{loop}} \cdot \beta    (8-2)

where l_loop is the loop detector length, μ_SV is the mean of the SV lengths, and β is the

sensitivity correction parameter. A simple method for calibrating β is described in

Section 3.2.2. By using intervals without LVs, the calculated constant g value can be

applied to Equation (8-1) for speed estimates.
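The following C# sketch works through Equations (8-1) and (8-2) with hypothetical inputs (a 17-ft mean SV length, a 6-ft loop, and β = 1; none of these values come from this report) to show how a constant g converts single-loop volume and occupancy into a speed estimate:

    using System;

    static class AtholExample
    {
        // Equation (8-2): constant g from the mean SV length and loop length (feet).
        static double G(double meanSvLenFt, double loopLenFt, double beta) =>
            52.80 / (meanSvLenFt + loopLenFt) * beta;

        // Equation (8-1): space mean speed (mph) for one interval.
        static double Speed(int vehicles, double occupancyPct, double hours, double g) =>
            vehicles / (hours * occupancyPct * g);

        static void Main()
        {
            double g = G(17.0, 6.0, 1.0);       // hypothetical 17-ft mean SV, 6-ft loop
            double t = 20.0 / 3600.0;           // a 20-second interval, in hours
            // 6 vehicles at 7% occupancy -> roughly 67 mph
            Console.WriteLine($"{Speed(6, 7.0, t, g):F1} mph");
        }
    }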

Section 3.2 described a revised region growing approach for separating intervals

with LVs from those without. The algorithm was based on two assumptions: (1) there is

at least one interval in a period that does not contain LVs, and (2) vehicle speeds are

constant over the period. The algorithm works well when both assumptions hold. When

traffic is congested or when LV volume is high, however, one or both assumptions may be violated, and the revised region growing algorithm may produce biased speed estimates.

In order to identify SV-only intervals for speed estimates, video and single-loop

sensors may be paired up at locations where both sensors exist. A paired video and

single-loop (Paired VL) sensor system takes advantage of both video data and single-loop

data. Because of the relative size differences between SVs and LVs, LVs can be easily

identified from video data. However, because of the projection from a three-dimensional scene onto a two-dimensional image plane, lane occupancy estimates from video data may contain considerable errors, especially for video data captured by uncalibrated

surveillance video cameras. Conversely, single-loop detectors provide accurate lane

occupancy measurements but do not directly generate vehicle composition data.


Combining video data with single-loop data, therefore, may result in improved speed

estimates.

8.2 ALGORITHM DESIGN

In the Paired VL sensor system, the VVDC system processes video data for

interval LV volume estimates. Intervals containing LVs are discarded from the speed

calculations. Therefore, intervals used for calculating speed estimates with Equation (8-

1) are those containing only SVs. Because SV lengths vary narrowly around their mean, a

constant g value calculated with Equation (8-2) can produce reasonably accurate speed

estimates.

Video and single-loop sensors that form a Paired VL sensor system must be

spatially close to each other. Ideally the single-loop detector should be visible in the

video camera’s field of view. Before the paired video and single-loop sensor data can be

fused, the two data sequences must be time synchronized. The Paired VL sensor

algorithm, therefore, contains three steps: (1) time and location synchronization between

the selected single-loop sensor and the VVDC system, (2) LV interval identification, and

(3) interval speed calculation. If an interval is identified as containing at least one LV, its

measurements are not used for the speed calculation. Instead, the speed calculated from

the previous interval is used as the speed for the current interval. Figure 8-1 shows the

flow chart for this algorithm.


Figure 8-1: Flow Chart for the Paired VL Sensor Algorithm

8.2.1 Time Synchronization

It is very challenging to synchronize the clocks between the video and the single-

loop data sequences because of uncertainties with the data collection, compression, and

transmission processes. The video signal from a WSDOT surveillance camera is

transmitted to a control cabinet via coaxial cable and then sent to a communications hub

via fiber optic cable. At the communications hub, it is combined with video signals from

many other cameras through the Frequency Division Multiplexing (FDM) technique. The

combined video signals are then transmitted to the WSDOT TSMC in Shoreline via glass

fiber. At the TSMC, the received video signals are de-multiplexed and connected


through a switch to a Digital VAX computer. This entire procedure is illustrated in

Figure 8-2 (WSDOT, 2006).

Figure 8-2: Schematic of the WSDOT Video Signal Communication System Source: WSDOT (2006).

Video signals from the selected camera can be transmitted via live video feed

fiber cable from the TSMC to the STAR Lab (details of this cable are discussed in

Section 6.1). The latency of video signals received at the STAR Lab is approximately 30

milliseconds.

While the latency of the video data is relatively stable, the delay of loop detector

data is volatile. A dual-loop’s measurements are processed and integrated into 20-second

intervals by the controller to which the dual-loop is connected. The calculated interval

mean speed, vehicle length, and bin volume data are then shipped to the TSMC over the


WSDOT sensor data communication network. Depending on the level of congestion

over the communications network, transmission delay varies. In addition to the

transmission delay, system delay induced by the controller makes predicting total delay

of the loop detector data more challenging. System delay is defined as the difference

between the measurement time of a 20-second interval and the shipment time of the data

to the TSMC. Although the controller clock is synchronized four times a day, the system

delay for a particular loop cannot be easily estimated. At each WSDOT dual-loop station,

the controller has a predefined order for stepping through the detector pairs. However,

such an order is station dependent and cannot be identified by looking at the detector list.

Controllers at dual-loop stations are configured for 40 detectors, but usually only 10 to 20

of them are defined. Suppose a dual-loop station has 20 inputs. The controller processes

the detector pairs in the decreasing order of their input positions. For example, if a

measurement data set was sent to the TSMC at 9:12:40, then the measurement interval

for the dual loop connected to the first input position is 9:12:21 – 9:12:40 and that for the

dual loop connected to the twentieth input position is 9:12:01 – 9:12:20. The system

delay for each input position is shown in Table 8-1.

Table 8-1: Processing Delay for Each Input Position

    Input position     1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20
    Delay (seconds)    1   2   3   4   5   6   7   8   9  10  11  12  13  14  15  16  17  18  19  20

Because a detector may be attached to any input position, the mapping

relationships between detector pairs and input positions at a dual-loop station are random.


We cannot determine the system delay for a given dual-loop detector from its location on

the road.

As soon as interval measurements arrive at the TSMC server computer, they are

time-stamped and broadcast by the loop_client application (see Section 3.3 for a brief

description of loop_client) over the Internet. Because the time stamp of a data set

includes both system delay and transmission delay, time synchronization of video and

single-loop sensors becomes a very challenging issue. Time synchronization cannot be

achieved by simply coordinating video and single-loop time stamps.

Although random delays are associated with the video or loop data transmission

processes, these random delays account for a very small portion of overall delay.

Therefore, the major portion of the time lag between video and single-loop time-stamps is

relatively stable and can be identified through matching single-loop measured volumes

and VVDC recorded volumes. Assume that this time lag is tl and tl∈[tlmin, tlmax]. Then

the purpose of the synchronization process is to determine the value of tl. In this study,

we propose a minimum-error-based approach for time synchronization. Because single-

loop measurements are aggregated data of 20-second intervals, outputs from the VVDC

system must be integrated into 20-second intervals for comparison. The VVDC system

summarizes traffic counts into 20-second interval counts. The beginning time of each

interval rotates from tlmin to tlmax with an increment of 1 second. This implies that each

interval volume measured by a single-loop detector will have vn = tlmax - tlmin + 1 video

measured volumes to compare to. After tm minutes, a total of 3*tm single-loop measured

interval volumes are produced. Correspondingly, vn sets of interval volumes (each set

contains 3*tm measurements) are produced by the VVDC system during the same period.


Equation (8-3) is then used to calculate the mean absolute errors between a video-

measured volume sequence and the loop-measured volume sequence.

e_j = \frac{1}{3 \cdot tm} \sum_{i=1}^{3 \cdot tm} \left| V_L(t_i) - V_V\left( t_i + tl_{min} + (j-1) \frac{tl_{max} - tl_{min}}{vn - 1} \right) \right|    (8-3)

where e_j is the mean absolute error for the jth video data sequence, V_L is the loop-measured interval volume, V_V is the VVDC-system-produced interval volume, and t_i represents the ending time of interval [t_i - 20, t_i]. If

e_u \le e_j for all j \in [1, vn], and e_u \le e_0    (8-4)

then the uth video sequence best matches the single-loop volume sequence and is an acceptable match based on the error threshold e_0. The time lag can then be

determined as

tl = tl_{min} + (u - 1) \frac{tl_{max} - tl_{min}}{vn - 1}    (8-5)

Once tl is available, data from the paired video and single-loop sensors can be

fused for improved speed estimates.
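A minimal C# sketch of this minimum-error lag search is shown below; it assumes the vn candidate video sequences have already been assembled, and the index u is zero-based here, so Equation (8-5) appears as u·(tl_max - tl_min)/(vn - 1):

    using System;

    static class LagSearch
    {
        // loopCounts[i]     : the 3*tm loop-measured 20-second interval volumes.
        // videoCounts[j][i] : vn candidate video-count sequences, one per lag step j.
        // Returns the lag tl (seconds) whose sequence minimizes the mean absolute
        // error of Equation (8-3), provided that error is within the threshold e0.
        public static double BestLag(int[] loopCounts, int[][] videoCounts,
                                     double tlMin, double tlMax, double e0)
        {
            int vn = videoCounts.Length, n = loopCounts.Length;
            int u = 0;
            double best = double.MaxValue;
            for (int j = 0; j < vn; j++)
            {
                double e = 0;
                for (int i = 0; i < n; i++)
                    e += Math.Abs(loopCounts[i] - videoCounts[j][i]);
                e /= n;                                   // mean absolute error e_j
                if (e < best) { best = e; u = j; }
            }
            if (best > e0)
                throw new InvalidOperationException("no acceptable match found");
            // Equation (8-5), with u zero-based instead of one-based.
            return vn > 1 ? tlMin + u * (tlMax - tlMin) / (vn - 1) : tlMin;
        }

        static void Main()
        {
            var loop = new[] { 5, 7, 6 };
            var video = new[] { new[] { 4, 6, 8 }, new[] { 5, 7, 6 }, new[] { 6, 5, 7 } };
            Console.WriteLine(BestLag(loop, video, -60, 60, 0.3));   // prints 0
        }
    }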

8.2.2 LV Interval Identification

The VVDC system logs each detected vehicle with detection time and vehicle

category information. When vehicles detected over a 20-second interval are counted for

comparison, the VVDC system also checks to see whether one or more LVs are detected.

Intervals containing at least one LV are marked as LV intervals. Data from LV intervals

are not used to calculate speed.


8.2.3 Interval Speed Calculation

If an interval contains SVs only, its average vehicle length should be very close to

the observed mean of the SV length distribution. This indicates that its g value can be

approximated by using Equation (8-2). Given that SV lengths vary narrowly, the uniform

vehicle length assumption required by Athol’s equation can be satisfied. This ensures

reasonably accurate speed estimates with Equation (8-1) for SV-only intervals.

For intervals with one or more LVs, average vehicle length may vary significantly

from interval to interval. For these intervals, a constant g value is inappropriate for speed

estimation with Equation (8-1). Because vehicle length is not available from single-loop

measurements, calculating the g value for each interval is not realistic. However, because

we do know the average length of SVs, and their length distribution has a narrow

variance, we can calculate the speed for intervals that contain only SVs. Assuming that

speeds do not vary much from interval to interval, we can use the speeds calculated for

the SV-only intervals for other intervals that are adjacent or relatively close in time.

Therefore, to avoid biased speed estimation, we drop intervals with LVs from the speed

calculation. If an interval contains one or more LVs, the speed estimated for its nearest

previous interval is assigned to the current interval.
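The per-interval decision rule described above can be sketched in C# as follows (hypothetical names; the constant g is assumed to come from Equation (8-2)):

    using System;

    class IntervalSpeedEstimator
    {
        readonly double g;              // constant g from Equation (8-2)
        double lastSpeed = double.NaN;  // most recent SV-only estimate

        public IntervalSpeedEstimator(double g) => this.g = g;

        // volume/occupancy from the single loop; hasLv from the VVDC system.
        public double Next(int volume, double occupancyPct, bool hasLv)
        {
            const double T = 20.0 / 3600.0;                 // 20-s interval, in hours
            if (!hasLv && occupancyPct > 0)
                lastSpeed = volume / (T * occupancyPct * g); // Equation (8-1)
            return lastSpeed;   // LV interval: reuse the nearest previous estimate
        }

        static void Main()
        {
            var est = new IntervalSpeedEstimator(2.30);      // hypothetical g value
            Console.WriteLine(est.Next(6, 7.0, hasLv: false));  // SV-only interval
            Console.WriteLine(est.Next(8, 9.5, hasLv: true));   // LV interval: reused
        }
    }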


9.0 SYSTEM DEVELOPMENT FOR PAIRED VIDEO AND SINGLE-LOOP SENSORS

9.1 SYSTEM DESIGN

A Paired VL system that implements the algorithm described in Section 8.2 was

developed with Microsoft Visual C#. Figure 9-1 shows the flow chart for this system.

This Paired VL system is based on the ST-Estimator and the VVDC system

introduced in Part II and Part III of this report. The live video feed is connected directly to the system. A single-loop detector that matches the video location is manually specified by a

user. Then the system can estimate speeds by fusing video and single-loop sensor data.

As shown in Figure 9-1, the Paired VL system consists of four modules: the Vehicle

Detection and Length Classification module, the Single-Loop Data Downloading module,

the Time Synchronization module, and the Speed Estimation module. Details of each

module are described in the section below.


[Figure 9-1 diagram: video data enter the Vehicle Detection and Length Classification Module (the VVDC system), and 20-second loop data for the user-selected loop enter through the Single-Loop Data Downloading Module. Until the two streams are synchronized, 20-second counts generated every second feed the Time Synchronization Module, which compares the video-based and loop-based interval volume queues to find the optimal tl. Once synchronized, interval volume and occupancy measurements, the empirical vehicle length distribution, and the standard SV length feed the Speed Estimation Module, which outputs a speed estimate for each 20-second interval, reusing the previous interval's estimate whenever the interval includes LVs.]

Figure 9-1: Flow Chart for the Paired VL System


9.2 SYSTEM IMPLEMENTATION

9.2.1 Vehicle Detection and Length Classification (VDLC) Module

The core of this module is the VVDC system. It takes live video input or digitized

video images for vehicle counts and classification. The arrival time and type for each

detected vehicle are logged by this module. If the video and single-loop data clocks have

not been synchronized, the VDLC module integrates individual vehicle data from the

video feed into 20-second interval counts every second. For example, it counts the

number of vehicles detected from 9:30:20 through 9:30:39 at 9:30:40 and that from

9:30:21 through 9:30:40 at 9:30:41. After the system has been synchronized, this module

produces a video-counted interval volume every 20 seconds. Also, a flag indicating

whether this interval contains LVs is attached to the output of each interval. Readers are

referred to Part III of this report for technical details on how the video-based vehicle

detection and classification tasks are performed in the VVDC system.
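A minimal C# sketch of this rolling 20-second count, driven by the logged arrival times, is shown below (hypothetical names, not the project source):

    using System;
    using System.Collections.Generic;
    using System.Linq;

    static class SlidingCounts
    {
        // arrivals: detection time of each vehicle logged by the VDLC module.
        // For every whole second t, yields the count of arrivals in (t-20s, t].
        public static IEnumerable<(DateTime End, int Count)> Every20s(
            IEnumerable<DateTime> arrivals, DateTime start, DateTime stop)
        {
            var sorted = arrivals.OrderBy(a => a).ToArray();
            for (var t = start; t <= stop; t = t.AddSeconds(1))
            {
                var from = t.AddSeconds(-20);
                int count = sorted.Count(a => a > from && a <= t);
                yield return (t, count);
            }
        }

        static void Main()
        {
            var t0 = new DateTime(2006, 12, 1, 9, 30, 0);
            var arrivals = new[] { t0.AddSeconds(5), t0.AddSeconds(12), t0.AddSeconds(31) };
            foreach (var (end, count) in Every20s(arrivals, t0.AddSeconds(20), t0.AddSeconds(40)))
                Console.WriteLine($"{end:HH:mm:ss} -> {count}");
        }
    }

Generating a fresh 20-second count every second is what gives the Time Synchronization module its one-second-granularity candidate sequences to match against the loop data.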

9.2.2 Single-Loop Data Downloading (SLDD) Module

The function of the SLDD module is to secure single-loop data input. It uses the

service provided by loop_client, an application developed by the UW ITS Research

Program. The loop_client application broadcasts loop detector measurements every 20

seconds on a designated server port (by default, it uses 9004). The SLDD module

connects to the loop_client server port with the TCP protocol to download all the loop

detector data. The user then selects a particular loop and extracts data from the specified

loop detector. If the clocks of the video and single-loop data have not been synchronized,

the loop’s 20-second interval measurements are sent to the Time Synchronization module


for time synchronization. Otherwise, loop data enter a data queue for the Speed

Estimation module to use for speed estimates.
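A minimal C# sketch of the download connection is shown below; the host name is hypothetical, and the loop_client record format is not documented in this report, so the sketch only receives raw lines:

    using System;
    using System.IO;
    using System.Net.Sockets;

    static class LoopClientReader
    {
        static void Main()
        {
            // loop_client broadcasts 20-second loop measurements on port 9004 by default.
            using var client = new TcpClient("tsmc.example.wsdot.gov", 9004); // hypothetical host
            using var reader = new StreamReader(client.GetStream());

            string? line;
            while ((line = reader.ReadLine()) != null)
            {
                // The record format is not given in this report; extracting the
                // selected detector's volume and occupancy would happen here.
                Console.WriteLine(line);
            }
        }
    }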

9.2.3 Time Synchronization (TS) Module

The TS module is a very important part of the Paired VL system because the

video and single-loop data sequences cannot be properly fused without time

synchronization. As mentioned in Section 8.2.1, the time difference between the two

clocks is affected by both transmission delay and system delay. However, direct

measurement of transmission delay and system delay is very difficult to accomplish.

Although individual vehicle arrival data are desirable for time synchronization, and the

VVDC system is able to provide such data, we are not able to use such disaggregated

data for time synchronization because the corresponding loop detector data to be

synchronized with the VVDC data have been aggregated into 20-second intervals by the

WSDOT loop detection system. Therefore, both video and loop clocks must be

synchronized on the basis of 20-second vehicle counts. Each single-loop detected interval

volume has vn video-based interval volumes to compare for the best match. For any

interval, there may be more than one match. However, if one looks at tm minutes of data,

there are 3*tm available intervals. The chance of having multiple matches for all 3*tm

intervals decreases quickly as tm increases. Therefore, if tm is large enough, time

synchronization can be satisfactorily achieved by using 20-second interval counts. In our

implementation, all of the parameters used in our time synchronization approach (tlmin,

tlmax, tm, and e0) can be specified by users. The default values for these parameters are

tlmin=-60 seconds, tlmax=60 seconds, tm=5 minutes, and e0=0.3 vehicle/interval. Figure 9-

2 shows a snapshot of the time synchronization interface.


Figure 9-2: The Time Synchronization Module for Video and Loop Subsystems

The TS module calculates the mean absolute error, e_j, by using Equation (8-3). Then it finds the tl that corresponds to the smallest error, e_u. Once the

video and single-loop data time stamps have been synchronized, the Paired VL system

recognizes the time difference between the video and single-loop data sequences. Proper

adjustments are made to the data sequences so that they can be fused for improved speed

estimates.

However, for cases in which the position of the single-loop detector is not visible

in the video camera’s field of view, time synchronization will be much more complicated

because of the travel time variation between the virtual loop locations in the VVDC

system and the actual single-loop locations. If vehicle speed varies significantly from


time to time, time synchronization may fail because no satisfactory tl can be found to

satisfy Equation (8-4).

9.2.4 Speed Estimation (SE) Module

The SE module uses both the interval volume counted by the VVDC system and

the interval volume and occupancy measured by the single-loop detectors for speed

estimates. If the VVDC system has set the LV flag to true, then at least one LV is present

in the current interval. Given that the g value cannot be properly calculated when

one or more LVs are present, the SE module does not conduct a speed calculation in this

situation. Instead, it loads the most recent speed estimate as the current interval speed. If

no LV is detected in the current interval, then, because of the features of SV length

distribution, the g value calculated by Equation (8-2) should be very close to the ground

truth g value. By using this g value, the space-mean speed for the current interval can be

calculated with Equation (8-1). By pairing video and single-loop sensors, we can take

advantage of the simplicity of Athol’s algorithm and still avoid the speed estimation bias

that would be introduced by intervals containing LVs. Figure 9-3 shows a snapshot of

the speed estimation interface for the Paired VL system.

Given that the specific parameters used for speed estimation may be different

from location to location, the SE module offers users a function for specifying the values

for these parameters, such as the mean vehicle length and loop sensitivity correction

coefficient. The SE module also plots the histogram of vehicle lengths for visual

verification purposes. Additionally, the SE module provides users the freedom to use

archived loop data. This function is especially useful for system tests in which recorded


videotapes and archived loop detector data may be used to evaluate system performance.

Estimated speeds can be displayed on screen or stored in a user-specified output file.

Figure 9-3: A Snapshot of Speed Estimation for a 20-Second Interval


10.0 TEST OF THE PAIRED VIDEO AND SINGLE-LOOP SYSTEM

10.1 TEST SITES AND DATA

Test sites must be selected on the basis of the following three criteria:

1. the paired video and single-loop sensors must be physically close to each other;

2. the single-loop detectors must be part of dual-loop detector stations that are available and in good working condition; and

3. longitudinal occlusion must be rare so that the VVDC system can produce reasonably accurate results.

With the help of WSDOT operational experts, two test sites were selected on

northbound I-5 for testing the Paired VL system. Test site I comprised loop station ES-

137R at milepost 169.79 on I-5 and the WSDOT surveillance video camera (ID = 12) at

milepost 169.39 on I-5 near NE 45th Street. Test site II comprised loop station ES-168R

at milepost 174.58 on I-5 and the WSDOT surveillance video camera (ID = 4) at NE

145th Street. The distance between the loop station and the camera was about 0.1 mile at

this site.

For test site I, a virtual loop detector was placed on the second lane of northbound

I-5 for video detection, as shown in Figure 10-1. Single-loop measurements from ES-

137R:MMN__2 (the single loop on lane two) were fused with the data from this virtual

loop. Similarly, a virtual loop and the single loop (ES-168R:MMN__2) on the second

lane of northbound I-5 were paired on test site II. At both test sites, dual-loop detectors

were available for speed and bin volume measurements. Dual-loop measured speeds were

used to verify speeds estimated from the Paired VL system.


Figure 10-1: A Snapshot of Test Site I for the Paired VL System

10.2 TEST RESULTS AND DISCUSSION

The Paired VL system was tested for 60 minutes at each test site. To

quantitatively evaluate the performance of the system, the authors defined a statistic called “estimation error”: the absolute difference between the

estimated speed and the dual-loop observed speed for each 20-second interval. The

Paired VL system was used to produce interval speed estimates. For comparison

purposes, interval speeds were also estimated with the traditional algorithm, which

directly applies unfiltered single-loop measurements to Equation (8-1) by using a


constant g value obtained from free-flow data for the site. The mean and standard

deviation of estimation errors were calculated for each test case and for both the Paired

VL and traditional algorithms. Table 10-1 shows the test results at both test sites,

including dual-loop measured speeds, Paired VL system estimated speeds, traditional

algorithm estimated speeds, and the estimation error statistics for each method.

Table 10-1: Online Test Results from the Two Test Locations

                                                 Test Site One                   Test Site Two
  Loop Code & Camera ID                          ES-137R:_MN__T2 & Camera ID 12  ES-168R:_MNN__2 & Camera ID 4
  Location                                       NE 45th Street Northbound       NE 145th Street Northbound
                                                 (milepost 169.39)               (milepost 174.58)
  Test Time Period                               5:35-6:35 PM on 09-Apr-2006     11:00-12:00 PM on 12-Aug-2006
  Loop Sensitivity Correction Coefficient        1.079                           1.020
  Mean of the Dual-Loop Observed Speeds (mph)    66.19                           62.39
  Mean of the Paired VL System Estimated
    Speeds (mph)                                 64.56                           60.77
  Mean of the Traditional Algorithm Estimated
    Speeds (mph)                                 64.47                           60.23
  Estimation Error for the Paired VL System (mph):
    Mean                                         4.00                            6.43
    Standard Deviation                           4.30                            5.81
  Estimation Error for the Traditional Algorithm (mph):
    Mean                                         4.41                            7.01
    Standard Deviation                           5.51                            6.53

Before a test was started at each site, a 10-minute speed estimation was conducted to calibrate the loop sensitivity correction coefficient, β. The sensitivity correction coefficient was chosen so that the mean estimation error for the traditional algorithm was equal to zero. Then the same β was applied throughout the test period for both the Paired VL system and the traditional algorithm. For test site I, the mean and standard deviation of estimation error for the traditional algorithm were 4.41 mph and 5.51 mph,


respectively. Those for the Paired VL system were 4.00 mph and 4.30 mph, respectively.

The Paired VL system thus provided more accurate speed estimates. Speed curves

for the dual-loop observed speeds, Paired VL system estimated speeds, and traditional

algorithm estimated speeds are plotted in Figure 10-2.

For test site II, the estimation error statistics showed that the Paired VL system

also performed better than the traditional algorithm. Figure 10-3 provides visual

comparisons of the speed curves for test site II.

[Figure: line plot of speed (mph) versus time, 17:35-18:35, comparing the dual-loop observed speeds, Paired VL system estimated speeds, and traditional algorithm estimated speeds]

Figure 10-2: Comparison between Observed Speeds and Estimated Speeds at Test Site I

[Figure: line plot of speed (mph) versus time, 11:00-12:00, comparing the dual-loop observed speeds, Paired VL system estimated speeds, and traditional algorithm estimated speeds]

Figure 10-3: Comparison between Observed Speeds and Estimated Speeds at Test Site II


In both Figures 10-2 and 10-3, the speed curves generated by the

Paired VL system are closer to the dual-loop speed curves than the curves generated by

the traditional algorithm. This shows that the Paired VL system provided better speed

estimates than the traditional algorithm, which has been widely used by traffic systems

management centers for traffic speed estimation.

During the testing process, we noticed that false dismissals of long vehicles were a major source of errors in the Paired VL system. If an interval contains one

or more LVs but is not flagged as an LV interval, the Paired VL system will provide

biased speed estimates. Conversely, longitudinal occlusion may generate false alarms of

LVs and hence make speed updates less frequent. In addition to the two major error

causes, random delays during data transmission may disrupt the synchronization of the video and single-loop data sequences and result in speed estimation errors.

Note that both tests were conducted under uncongested conditions. Because the

current VVDC system is not capable of producing good vehicle detection and

classification data under traffic conditions with significant vehicle occlusions, the authors

were not able to test the Paired VL system under congested scenarios. Nonetheless, the

concept of the paired video and single-loop sensor system was demonstrated to a certain

extent.

10.3 TEST SUMMARY FOR THE PAIRED VL SYSTEM

To improve the accuracy of traffic speed estimation, a paired video and single-

loop sensor algorithm was developed and implemented as the Paired VL system

described in this report. The algorithm combines video-based vehicle detection and

classification results with single-loop measurements to avoid the biasing effects of LVs


on traffic speed estimates. Two test sites were selected to evaluate the performance of

the Paired VL system. The means of estimation error for the Paired VL system were 4.00

mph and 6.43 mph for test sites I and II, respectively. In comparison to the speed

estimates produced by the traditional algorithm, the Paired VL system produced better

speed estimation accuracy in both tests.

Investigation of Paired VL system errors showed that false dismissals of trucks

and longitudinal occlusions were major causes of speed estimation errors. Also, random

delays during data transmission could sometimes disturb the synchronized data sequences

and result in estimation errors.


PART V SUMMARY

11.0 CONCLUSIONS AND RECOMMENDATIONS

11.1 CONCLUSIONS

Traffic speed and truck volume data are important variables for transportation

planning, pavement design, traffic safety, traffic operations, and vehicle emission control.

However, these data are not directly measured by single-loop detectors, the most widely

available type of sensor on the existing roadway network. To obtain quality estimates of

traffic speed and truck volume data with existing freeway surveillance equipment, several

algorithms were developed and implemented in this study.

First, a new speed estimation algorithm that uses single-loop data was developed.

This algorithm implements the region growing mechanism commonly used in video

image processing. This region growing algorithm, together with the vehicle classification

algorithm developed by Wang and Nihan (2003), was implemented in the ST-Estimator

for improved speed and truck volume data. In tests of the ST-Estimator, the new speed

algorithm outperformed both the traditional algorithm and the speed estimation algorithm

developed by Wang and Nihan (2003). By using the speed estimated with this algorithm,

LV volumes were estimated with the approach based on the Nearest Neighbor Decision

rule. LV volume estimation errors at three test locations (the second lanes at stations ES-167D, ES-172R, and ES-209D) were within 7.5 percent over a 24-hour

period. The ST-Estimator test results indicated that the ST-Estimator can be applied to

obtain reasonably accurate speed and LV volume estimates at single-loop stations.

Second, several computer-vision based algorithms were developed or applied to

extract the background image from a video sequence, detect the presence of vehicles,


identify and remove shadows, and calculate pixel-based vehicle lengths for classification.

These algorithms were implemented in the prototype VVDC system, written in Microsoft Visual C#. As a plug-and-play system, the VVDC system is capable of

processing live video signals in real time. A WinTV-USB card was used to capture live

video images. The VVDC system can also be used to process digitized video images in

the JPEG or BMP formats. Because the VVDC system does not require camera

calibration, it can be easily applied to locations with existing surveillance video cameras.

Also, users are allowed to specify the bin length threshold to collect desired types of

vehicles with the VVDC system.

The VVDC system was tested at three test locations under different traffic and

environmental conditions. The accuracy of vehicle detection was over 97 percent, and

the total truck count error was lower than 9 percent for all three tests. This implies that

the video image processing method developed for vehicle detection and classification in

this study is indeed a viable alternative for truck data collection. However, the prototype

VVDC system is currently designed to work in daytime lighting and under conditions

without longitudinal vehicle occlusion and severe camera vibration.

Third, a speed estimation algorithm using paired video and single-loop sensor

inputs was designed. The core idea of this algorithm is to use a video sensor to screen

out intervals containing LVs before using single-loop measurements for speed estimation.

The traditional speed estimation algorithm is based on the assumption of uniform vehicle

length. When a significant number of LVs are present in a traffic stream, the mean

effective vehicle length may vary significantly from interval to interval and hence violate

the uniform vehicle length assumption. If intervals containing LVs are used in the speed


calculations, biased speed estimates will result. The paired video and single-loop sensors

rely on video image processing for LV detection and single-loop data for speed

calculation. If an interval is identified as containing one or more LVs, its single-loop

measurements are dropped from the speed calculations. Instead, the most recently

calculated interval speed is assigned to the interval containing LVs. A paired video and

single-loop algorithm was implemented in the Paired VL system described in this report.

Video and single-loop data from two test sites were used to evaluate the performance of

the Paired VL system. The authors’ experiments indicated that speeds estimated by the

Paired VL system were more accurate than speeds estimated by the traditional algorithm.

Extreme values resulting from LV presence were effectively eliminated. However,

finding a location with both video and single-loop sensors may not be easy. Also, time

synchronization for the Paired VL system is very challenging, and detection errors from

the video sensor may significantly degrade the performance of the Paired VL system. All these factors limit the applicability of the Paired VL system, although the

potential effectiveness of the idea was demonstrated in this study.

In short, several algorithms and corresponding computer tools were developed for

improved speed and truck data in this study. The authors conclude that quality speed and

truck volume data can be estimated from single-loop data by applying the ST-Estimator.

Although the prototype VVDC system now works only under certain restricted

conditions, the potential utility and effectiveness of the system were demonstrated, and

the authors conclude that further development of the VVDC system is warranted. Given

that surveillance video cameras have been increasingly deployed in recent years, the

VVDC system can be a cost-effective solution for turning such surveillance video


cameras into video detectors when necessary. For locations with both video and single-

loop sensors, speed estimates can be improved by combining video data with single-loop

data.

11.2 RECOMMENDATIONS

The authors recommend further studies in the following two directions:

(1) Improve the accuracy and applicability of the VVDC system. Major issues

deserving further research effort include the following:

Traffic occlusion. Traffic occlusion typically results from inappropriate video

camera location, flat pitch angle of cameras, and heavy traffic volumes on the road.

Mathematical approaches, such as Markov Random Field models and motion-based features, may be used to handle this problem.

Camera vibration. Most surveillance video cameras experience vibration caused by wind or shaking of the roadway infrastructure. Algorithms based on background subtraction are

extremely sensitive to camera vibrations. Feature-based detection may be a good

solution to this problem.

Reflection. Reflections of vehicle headlights may cause early

detection and overestimation of vehicle length. Models for reflection rejection are

needed to improve the accuracy of the VVDC system.

(2) Investigate loop detector data accuracy. When the Paired VL system was

tested, the authors found that video-recorded, 20-second counts sometimes varied

significantly from single-loop counts. This vehicle count inconsistency made it very

difficult to synchronize the video and single-loop clocks. In the process of finding good

test sites, the authors studied loop data quality at several stations by comparing the


ground-truth volumes resulting from manual video counts with single-loop measured

volumes. The authors found that many single-loop detectors have noticeable problems of

false alarms and false dismissals. The Paired VL system could be easily modified to be

an effective tool for verifying the working status of loop detectors.

The authors believe that an improved VVDC system would be very useful for

collecting freeway speed and truck volume data. It could also be applied to collect

intersection performance measures by using onsite surveillance or detection cameras. In

addition to the computer that hosts the VVDC system, a WinTV card is the only piece of

hardware required. The VVDC system, therefore, could provide a cost-effective solution

for automatic traffic data collection at locations with surveillance or detection cameras.


ACKNOWLEDGMENTS

The authors would like to acknowledge the financial support for this project from

Transportation Northwest (TransNow), the USDOT University Transportation Center for

Federal Region 10, and the Washington State Department of Transportation. The authors

also wish to express sincere appreciation to WSDOT and TransNow personnel,

specifically Morgan Balogh, Pete Briglia, Vinh Dang, Mark Morse, Michael Forbis, John

Rosen, and David Bushnell, for their valuable suggestions and kind help in setting up the

video-feed link to the STAR Lab. Special thanks to Dr. Dan Dailey and Mr. Fritz Cathey

for providing and troubleshooting the video switch program.


REFERENCES

American Association of State Highway and Transportation Officials (AASHTO). 2004.

A Policy on Geometric Design of Highways and Streets. Fifth Edition, AASHTO,

Washington D.C.

Anderson, I.B. and R.A. Krammes. 2000. New Consistency Model for Rural Highways

and Its Relationship to Safety. ASCE Journal of Transportation Engineering, Vol.

130, No. 3, pp. 286-293.

Aredonk, J. 1996. A comparison of real-time freeway speed estimation using loop

detectors and AVI technologies. Compendium: Graduate Student Papers on

Advanced Surface Transportation Systems, Southwest Region University

Transportation Center, Texas Transportation Institute, Texas A&M University

System, College Station, TX, J-i – J-140.

Avery, R. P., Y. Wang and G. S. Rutherford. 2004. Length-Based Vehicle Classification

Using Images from Uncalibrated Video Cameras. Proceedings of the 7th

International IEEE Conference on Intelligent Transportation Systems, pp. 737-

742.

Athol, P. 1965. Interdependence of Certain Operational Characteristics within a Moving

Traffic Stream. Highway Research Record 72, pp. 58-87.

Bonneson, J. and M. Abbas. 2002. Video Detection for Intersection and Interchange

Control. FHWA/TX-03/4285-1. Texas Transportation Institute. College Station,

Texas.

Canny, J. 1986. A Computational Approach to Edge Detection. IEEE Transactions on

Pattern Analysis and Machine Intelligence, Vol. 8, No. 6, pp. 679-698.

Cheevarunothai, P., Y. Wang, and N. L. Nihan. 2005. Development of Advanced Loop

Event Data Analyzer (ALEDA) for Investigations of Dual-Loop Detector


Malfunctions. Proceedings CD-ROM of the 12th World Congress on Intelligent

Transportation Systems, San Francisco, California.

Cherrett, T., H. Bell, and M. McDonald. 2000. Traffic Management Parameters from Single

Inductive Loop Detectors. Transportation Research Record, No. 1719, 112-120,

Washington D.C.

Coifman, B. 2001. Improved Velocity Estimation Using Single Loop Detectors.

Transportation Research, Part A, Vol. 35, No. 10, 863-880.

Coifman, B., S. Dhoorjaty, and Z. Lee. 2003. Estimating Median Velocity Instead of

Mean Velocity at Single Loop Detectors. Transportation Research, Part C, Vol.

11, No. 3-4, 211-222.

Courage, K. G., C. S. Bauer, and D. W. Ross. 1976. Operating parameters for main-line

sensors in freeway surveillance systems, Transportation Research Record 601,

TRB, National Research Council, Washington, D.C., pp. 19-26.

Cucchiara, R., C. Grana, M. Piccardi, and A. Prati. 2003. Detecting Moving Objects,

Ghosts, and Shadows in Video Streams. IEEE Transactions on Pattern Analysis

and Machine Intelligence, Vol. 25, No. 10, pp. 1337-1342.

Cunagin, W.D. and C.J. Messer. 1983. Passenger Car Equivalents for Rural Highways.

Transportation Research Record 905, TRB, National Research Council,

Washington, D.C., pp. 61-68.

Dailey, D.J. 1999. A Statistical Algorithm for Estimating Speed from Single Loop

Volume and Occupancy Measurements. Transportation Research B, Vol. 33B,

No. 5, pp. 313-322.

EPA (US Environmental Protection Agency). 2001. National Air Quality and Emissions

Trends Report, 1999. EPA 454/R-01-004. EPA. North Carolina.


Fung, G.S.K., N.H.C. Yung, G.K.H. Pang, and A.H.S. Lai. 2002. Effective Moving Cast

Shadow Detection for Monocular Color Traffic Image Sequences. Optical

Engineering, Vol. 41, No. 6, pp. 1425-1440.

Gamba, P., M. Lilla, and A. Mecocci. 1997. A Fast Algorithm for Target Shadow

Removal in Monocular Colour Sequences. Proceedings of the International

Conference on Image Processing, Vol. 1, pp. 436-447.

Gerlough, D. L., and M. J. Huber. 1975. Traffic Flow Theory, A Monograph, TRB

Special Report 165, TRB, National Research Council, Washington, D.C.

Graettinger, A.J., R.R. Kilim, M.R. Govindu, P.W. Johnson, and S.R. Durrans. 2005.

Federal Highway Administration Vehicle Classification from Video Data and a

Disaggregation Model. Journal of Transportation Engineering, Vol. 131, No. 9.

pp. 689-698.

Gronbeck, C. 2004. SunAngle. Accessed online at http://www.susdesign.com/sunangle/

on 09 December 2005.

Gu, X., D. Yu, and L. Zhang. 2005. Image Shadow Removal Using Pulse Coupled Neural

Network. IEEE Transactions on Neural Networks, Vol. 16, Issue 3, pp. 692-698.

Gupte, S., O. Masoud, R.F.K. Martin, and N.P. Papanikolopoulos. 2002. Detection and

Classification of Vehicles. IEEE Transactions on Intelligent Transportation

Systems, Vol. 3, No. 1, pp. 37-47.

Hallenbeck, M. 1993. Seasonal Truck Volume Patterns in Washington State.

Transportation Research Record 1397, TRB, National Research Council,

Washington, D.C., pp. 63-67.

Hasegawa, O. and T. Kanade. 2005. Type Classification, Color Estimation, and Specific

Target Detection of Moving Targets on Public Streets. Machine Vision and

Applications, Vol. 16, No. 2, pp. 116-121.


Hellinga, B. R. 2002. Improving Freeway Speed Estimates from Single-Loop Detectors.

ASCE Journal of Transportation Engineering, 128(1), 58-67.

Hsieh, C., E. Lai, Y. Wu, and C. Liang. 2004. Robust, Real Time People Tracking with

Shadow Removal in Open Environment. 5th Asian Control Conference, Vol. 2,

pp. 901-905.

ITE (Institute of Transportation Engineers). 1997. Traffic Detector Handbook. Second

Edition. ITE. Washington D.C.

Kim, J.J., S. Smorodinsky, M. Lipsett, B.C. Singer, A.T. Hodgson, and B. Ostro. 2004.

Traffic-related Air Pollution near Busy Roads: The East Bay Children’s

Respiratory Health Study. American Journal of Respiratory and Critical Care

Medicine, Vol. 170, pp. 520-526.

Kwon, J., P. Varaiya, and A. Skabardonis. 2003. Estimation of Truck Traffic Volume

from Single Loop Detector Using Lane-to-Lane Correlation. Preprint CD-ROM

from the 82nd Annual Meeting of Transportation Research Board.

Lai, A.H.S., G.S.K. Fung, and N.H.C. Yung. 2001. Vehicle Type Classification from

Visual-Based Dimension Estimation. Proceedings of the IEEE Intelligent

Transportation Systems Conference, Oakland, CA, pp. 201-206.

Lo, B.P.L., S. Thiemjarus, and G. Yang. 2003. Adaptive Bayesian Networks for Video

Processing. Proceedings of the 2003 International Conference on Image

Processing, Vol. 1, pp. 889-892.

Martin, P.T., G. Dharmavaram, and A. Stevanovic. 2004. Evaluation of UDOT’s Video

Detection Systems: System’s Performance in Various Test Conditions. Report No:

UT-04.14. Salt Lake City, Utah.

Michalopoulos, P.G. 1991. Vehicle Detection Video Through Image Processing: The

Autoscope System. IEEE Transactions on Vehicular Technology, Vol. 40, No. 1.

pp. 21-29.


Microsoft Inc. 2002. Microsoft DirectX Web site. Accessed on Oct. 16, 2005 at

http://www.microsoft.com/windows/directx/default.aspx.

Mikhalkin, B., H. J. Payne, and L. Isaksen. 1972. Estimation of speed from presence

detectors, Highway Research Record 388, HRB, National Research Council,

Washington, D.C., pp. 73-83.

National Highway Traffic Safety Administration (NHTSA). 2004. Traffic Safety Facts

2003: A Compilation of Motor Vehicle Crash Data from the Fatality Analysis

Reporting System and the General Estimates System. US Department of

Transportation, National Highway Traffic Safety Administration, Washington,

D.C.

Otsu, N. 1979. A Threshold Selection Method from Gray-Level Histograms. IEEE

Transactions on Systems, Man and Cybernetics, Vol. 9, No. 1, pp. 62-66.

Peters, A., S. von Klot, M. Heier, I. Trentinaglia, A. Hörmann, H.E. Wichmann, and H.

Löwel. 2004. Exposure to Traffic and the Onset of Myocardial Infarction. The

New England Journal of Medicine, Vol. 351, No. 17, pp. 1721-1730.

Petty, K.F., P. Bickel, M. Ostland, J. Rice, F. Schoenberg, J. Jiang, and Y. Ritov. 1998.

Accurate Estimation of Travel Times from Single-Loop Detectors. Transportation

Research, Part A, Vol. 32, No. 1, 1-17.

Prati, A., I. Mikic, M.M. Trivedi, and R. Cucchiara. 2003. Detecting Moving Shadows:

Algorithms and Evaluation. IEEE Transactions on Pattern Analysis and Machine

Intelligence, Vol. 25, No. 7, pp. 918-923.

Pushkar, A., F. L. Hall, and J.A. Acha-Daza. 1994. Estimation of speeds from single-loop

freeway flow and occupancy data using cusp catastrophe theory model.

Transportation Research Record, No. 1457, 149-157, Washington, D.C.

Rad, R. and M. Jamzad. 2005. Real Time Classification and Tracking of Multiple

Vehicles in Highways. Pattern Recognition Letters, Vol. 26, No. 10, pp. 1597-

1607.


Rhodes, A., D.M. Bullock, J. Sturdevant, Z. Clark, and D.G. Candey, Jr. 2005.

Evaluation of Stop Bar Video Detection Accuracy at Signalized Intersections.

Proceedings of the 84th Annual Meeting of Transportation Research Board (CD-

ROM), Washington, D.C.

Scanlan, J.M., D.M. Chabries, and R.W. Christiansen. 1990. A Shadow Detection and

Removal Algorithm for 2-D Images. Proceedings of the International Conference

on Acoustics, Speech, and Signal Processing, Vol. 4, pp. 2057-2060.

Shapiro, L. G. and G. C. Stockman. 2001. Computer Vision. Prentice Hall, New Jersey,

pp. 289-290.

Sun, C., and S.G. Ritchie. 1999. Individual Vehicle Speed Estimation Using Single Loop

Inductive Waveforms. Journal of Transportation Engineering, Vol. 125, No. 6,

pp. 531-538.

Tian, Z.Z., M.D. Kyte, and C.J. Messer. 2002. Parallax Error in Video-Image Systems.

Journal of Transportation Engineering, Vol. 128 (3), pp. 218-223.

Transportation Research Board (TRB). 2000. Highway Capacity Manual. TRB, National

Research Council, Washington, D.C.

University of Washington Intelligent Transportation Systems (UW ITS) Research

Program. 1997. loop_client: Real-Time Freeway Sensor Information Over The

Internet. Accessed online at

http://www.its.washington.edu/software/loop_cli.html.

Wang, J.M., Y.C. Chung, C.L. Chang, and S.W. Chen. 2004. Shadow Detection and

Removal for Traffic Images. IEEE International Conference on Networking,

Sensing and Control, Vol. 1, pp. 649-654.

Wang, Y., and N.L. Nihan. 2000. Freeway traffic speed estimation using single loop

outputs, Transportation Research Record, No. 1727, 120-126, TRB, National

Research Council, Washington, D.C.


Wang, Y. and N.L. Nihan. 2003. Can Single-Loop Detectors Do the Work of Dual-Loop

Detectors? ASCE Journal of Transportation Engineering, 129(2), pp. 169-176.

Washington State Department of Transportation (WSDOT). 2002. 2001 Annual Traffic

Report, Seattle, Washington.

Washington State Department of Transportation (WSDOT). “Seattle Area Traffic

Frequently Asked Questions.” Website at

http://www.wsdot.wa.gov/Traffic/seattle/questions/, Accessed Feb. 20, 2006.

Weber, A. N. 1999. Verification of Radar Vehicle Detection Equipment Study SD98-15

Final Report. South Dakota Department of Transportation, 1999.

Xu, L., J.L. Landabaso, and M. Pardas. 2005. Shadow Removal with Blob-Based

Morphological Reconstruction for Error Correction. Proceedings of the IEEE

International Conference on Acoustics, Speech, and Signal Processing, Vol. 2,

pp. 729-732.

Zhang, X., Y. Wang, and N.L. Nihan. 2003. Investigating Dual-Loop Errors Using Video

Ground-Truth Data. Proceedings of the 13th Annual Meeting of ITS America (CD-

ROM). Paper 158.

Zheng, J., Y. Wang, N.L. Nihan, and M.E. Hallenbeck. 2006. Extracting Roadway

Background Image: a Mode-Based Approach. Transportation Research Record.

In Press.