Pattern Recognition Presentation

Transcript of Pattern Recognition Presentation

Page 1: Pattern Recognition Presentation

Illumination Assessment for Vision-Based Traffic Monitoring

By SHRUTHI KOMAL GUDIPATI

Page 2: Pattern Recognition Presentation

Outline

Introduction
PVS system design & concepts
Assessing lighting
Assessing contrast
Assessing shadow presence
Conclusion

Page 3: Pattern Recognition Presentation

Introduction

Vision systems in the traffic domain operate autonomously over varying environmental conditions

They use different parameter values or algorithms depending on these conditions

Suitable parameter values depend on the ambient conditions visible in the camera images

Page 4: Pattern Recognition Presentation

PVS system

A commercial real-time vision system for traffic monitoring that detects, tracks, and counts vehicles

Uses a large volume of video data obtained from 25 different scenes

Switches between different parameter values and algorithms depending on aspects of the scene illumination

Page 5: Pattern Recognition Presentation

Aspects of Scene illumination

Is the scene well-lit? Are vehicle bodies visible?

In poorly-lit scenes, are only the vehicle lights visible?

Page 6: Pattern Recognition Presentation

Aspects of Scene illumination

Page 7: Pattern Recognition Presentation

Aspects of Scene illumination

Page 8: Pattern Recognition Presentation

Aspects of Scene illumination

Is the contrast sharp enough?

For example: is visibility sufficient for reliable detection, or is it too diminished by fog, dust, or snow?

Page 9: Pattern Recognition Presentation

Aspects of Scene illumination

Are vehicles in the scene casting shadows?

Page 10: Pattern Recognition Presentation

PVS System Design

Processes frames at 30 Hz

Processes images from up to 4 cameras simultaneously

Compact: fits on a 3U VME board

Page 11: Pattern Recognition Presentation

3U VME Board

Page 12: Pattern Recognition Presentation

PVS system hardware

Two Texas Instruments TMS320C31 DSP chips

A Sensar pyramid chip Custom ALU implemented using a

Xilinx chip

Page 13: Pattern Recognition Presentation

Operation principle

Maintains a reference image that contains the scene as it would appear if no vehicles were present

Each incoming frame is compared to the reference

Pixels where there are significant differences are grouped together into "fragments" by the detection algorithm

These fragments are grouped and tracked from frame to frame using a predictive filter
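A minimal sketch of this detection step, assuming grayscale numpy frames; the threshold value is hypothetical, and PVS performs this on dedicated hardware rather than with scipy:

```python
import numpy as np
from scipy import ndimage

def detect_fragments(frame, reference, thresh=25):
    """Compare the incoming frame against the vehicle-free reference image
    and group significantly different pixels into fragments.

    frame, reference: 2D grayscale arrays of the same shape
    thresh:           hypothetical difference threshold
    Returns a labelled image and the number of fragments found.
    """
    diff = np.abs(frame.astype(np.int32) - reference.astype(np.int32))
    mask = diff > thresh                 # significantly different pixels
    labels, n = ndimage.label(mask)      # group into connected fragments
    return labels, n
```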

Page 14: Pattern Recognition Presentation

One dimensional strip representation

Reduces the 2D image of each lane to a 1D "strip"

An integration operation sums two pixel-wise measures across the portion of each image row that is spanned by the lane, yielding a brightness and an energy measurement for each row

The integration operation is performed by the ALU, which takes as input a bit-mask identifying each lane

Page 15: Pattern Recognition Presentation

2D -> 1D transformation

Page 16: Pattern Recognition Presentation

Strip measurements

Two measurements, brightness and energy, are computed for each strip element y of each strip s:

Brightness: B(s, y) = Σ (pixel values in W_y)

Energy: E(s, y) = [ Σ |p_{i+1} − p_i| over adjacent pixels p_i, p_{i+1} in W_y ] / ||W_y||

where W_y is the set of lane pixels in image row y and ||W_y|| is its size
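A software sketch of these two measures, assuming a grayscale numpy image and a boolean lane mask (in PVS this integration is done by the custom ALU):

```python
import numpy as np

def strip_measurements(image, lane_mask):
    """Reduce the 2D lane image to a 1D strip of (brightness, energy) per row.

    image:     2D grayscale array
    lane_mask: 2D boolean array, True where the pixel belongs to the lane
    Returns (B, E): 1D arrays with one element per image row.
    Assumes the lane pixels within a row are contiguous.
    """
    img = image.astype(np.float64)
    rows = img.shape[0]
    B = np.zeros(rows)
    E = np.zeros(rows)
    for y in range(rows):
        w = img[y][lane_mask[y]]       # pixels of row y spanned by the lane
        if w.size == 0:
            continue                   # row outside the lane: B = E = 0
        B[y] = w.sum()                 # brightness: sum of pixels in W_y
        if w.size > 1:
            # energy: summed absolute differences between adjacent
            # lane pixels, normalized by ||W_y||
            E[y] = np.abs(np.diff(w)).sum() / w.size
    return B, E
```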

Page 17: Pattern Recognition Presentation

Reference strips

Brightness and Energy measurements gathered from a strip over time are used to construct a reference strip

For scenes in which traffic is flowing freely, the reference strip can be constructed by IIR filtering

IIR filtering doesn't work in stop-and-go or very crowded conditions
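A one-line version of such an IIR update; the blending weight alpha is a hypothetical value, not one given in the paper:

```python
def update_reference(ref, current, alpha=0.05, congested=False):
    """First-order IIR update of the reference strip.

    ref, current: 1D arrays of per-row measurements (brightness or energy)
    alpha:        hypothetical blending weight
    In stop-and-go traffic the update is skipped, since vehicles would
    otherwise bleed into the reference.
    """
    if congested:
        return ref
    return (1.0 - alpha) * ref + alpha * current
```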

Page 18: Pattern Recognition Presentation

Strip element classification

Classify each element of the current strip as background or non-background

Done by computing the brightness and energy difference measures:

ΔB(y) = B(I, y) − B(R, y) − o · ||W_y||
ΔE(y) = | E(I, y) − E(R, y) |

where I is the current strip, R is the reference strip, and o is a global brightness offset measured by a separate process

Page 19: Pattern Recognition Presentation

Classification as Background or non-Background

Page 20: Pattern Recognition Presentation

Strip element classification

Each strip element that is classified as non-background is further classified as "bright" or "dark", depending on whether its brightness is greater or less than that of the corresponding reference strip element
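Putting the last two classification slides together, a sketch of the per-element labelling; the thresholds tB and tE are hypothetical, since the paper does not give values:

```python
import numpy as np

def classify_strip(B_cur, E_cur, B_ref, E_ref, W_size, o=0.0,
                   tB=400.0, tE=4.0):
    """Label each strip element as background / non-background, and split
    the non-background ones into bright or dark.

    B_cur, E_cur: current-strip brightness and energy (1D arrays)
    B_ref, E_ref: reference-strip brightness and energy
    W_size:       per-row lane width ||W_y|| in pixels
    o:            global brightness offset (measured by a separate,
                  undescribed process in the paper)
    tB, tE:       hypothetical thresholds on |dB| and dE
    """
    dB = B_cur - B_ref - o * W_size
    dE = np.abs(E_cur - E_ref)
    non_bg = (np.abs(dB) > tB) | (dE > tE)
    # non-background elements are 'bright' when brighter than the reference
    bright = non_bg & (B_cur > B_ref)
    dark = non_bg & ~bright
    return non_bg, bright, dark
```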

Page 21: Pattern Recognition Presentation

Illumination Assessment

Over all frames grabbed in a two-minute interval, every strip element that has been identified as non-background and has a significant difference measure is used to update various statistical measures

Values of these measures are used to assess the lighting, contrast, and shadows

Page 22: Pattern Recognition Presentation

Fragment Detection

Groups non-background strip elements into symbolic "vehicle fragments"

To prevent false-positive vehicle detections, the system must avoid detecting illumination artifacts as vehicle fragments

Page 23: Pattern Recognition Presentation

Fragment Detection

Uses three different detection techniques, depending on the nature of the scene illumination:

Detection in well-lit scenes without vehicle shadows
Detection in well-lit scenes with vehicle shadows
Detection in poorly-lit scenes

Page 24: Pattern Recognition Presentation

Fragment Detection

Detection in well-lit scenes without vehicle shadows

A scene is termed well-lit if the entire vehicle body is visible

Scenes are termed poorly-lit if the only clearly visible vehicle components are the headlights or taillights

Page 25: Pattern Recognition Presentation

Fragment Detection

Detection in well-lit scenes with vehicle shadows

In well-lit scenes where vehicles are casting shadows, the detection process must be modified so that non-background strip elements due to shadows are not grouped into vehicle fragments

Uses stereo or motion cues to infer height

Page 26: Pattern Recognition Presentation

Fragment Detection

Detection in poorly-lit scenes

Where only vehicle lights are visible, fragment extraction via connected components is prone to false positives due to headlight reflections

Fragments are extracted by identifying compact bright regions of non-background strip elements around local brightness maxima
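One plausible reading of this extraction step, operating on the 1D strip arrays from the earlier sketches; the neighbourhood radius is a hypothetical parameter:

```python
import numpy as np

def headlight_fragments(dB, non_bg, radius=2):
    """Extract fragments in poorly-lit scenes as compact bright regions of
    non-background strip elements around local brightness maxima.

    dB:     per-element brightness difference along the strip (1D array)
    non_bg: boolean mask of non-background strip elements
    radius: hypothetical half-width defining 'compact'
    Returns a list of (start, end) index ranges, one per fragment.
    """
    fragments = []
    n = len(dB)
    for y in range(n):
        if not non_bg[y]:
            continue
        lo, hi = max(0, y - radius), min(n, y + radius + 1)
        # keep y only if it is the brightness maximum of its neighbourhood
        if dB[y] > 0 and dB[y] == dB[lo:hi].max():
            fragments.append((lo, hi - 1))
    return fragments
```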

Page 27: Pattern Recognition Presentation

Fragment tracking & grouping

After the vehicle fragments have been extracted, they are passed to the Tracker module, which tracks them over time and groups them into objects
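The paper does not name the predictive filter used; as an illustration, a simple constant-velocity alpha-beta filter could provide the frame-to-frame prediction (the gain values are hypothetical):

```python
class AlphaBetaTracker:
    """Constant-velocity predictive filter, a simple stand-in for the
    Tracker module's frame-to-frame prediction."""

    def __init__(self, y0, alpha=0.85, beta=0.005):
        self.y, self.v = float(y0), 0.0      # position and velocity state
        self.alpha, self.beta = alpha, beta  # hypothetical gains

    def predict(self):
        # predicted strip position of the fragment in the next frame
        return self.y + self.v

    def update(self, y_meas):
        pred = self.predict()
        r = y_meas - pred                    # innovation (prediction error)
        self.y = pred + self.alpha * r
        self.v = self.v + self.beta * r
        return self.y
```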

Page 28: Pattern Recognition Presentation

Assessing lighting

Measures used for assessing whether the scene is well-lit, i.e. whether the entire body of most vehicles will be visible:

Ndark + Nbright = total number of non-background pixels that were detected
Pdark = Ndark / (Ndark + Nbright)

If the scene is poorly-lit, the background image will be quite dark, and it will be difficult to detect any pixel with a dark surface color. Under this condition Ndark will be small, and hence Pdark will be small
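A sketch of this decision rule; the threshold on Pdark is hypothetical, as the slide gives no value:

```python
def assess_lighting(n_dark, n_bright, k=0.2):
    """Decide well-lit vs poorly-lit from the dark-pixel fraction.

    n_dark, n_bright: counts of dark / bright non-background strip elements
                      accumulated over the two-minute interval
    k:                hypothetical decision threshold on Pdark
    """
    total = n_dark + n_bright
    if total == 0:
        return "undetermined"
    p_dark = n_dark / total
    # poorly-lit scenes yield few detectable dark elements, so Pdark is small
    return "well-lit" if p_dark > k else "poorly-lit"
```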

Page 29: Pattern Recognition Presentation
Page 30: Pattern Recognition Presentation

Assessing contrast

Two typical causes of insufficient contrast: fog or raindrops

Contrast can be measured using the energy difference measure ΔE(y)

In low-contrast scenes that occur during the day, vehicles will usually appear as objects darker than the haze, which often appears rather bright

In low-contrast scenes occurring at night, no dark regions will be detectable

Measure ΔEbright and ΔEdark
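One way this could look in code, reusing the bright/dark masks and ΔE values from the classification sketch; both thresholds are hypothetical:

```python
import numpy as np

def assess_contrast(dE, bright, dark, k_day=5.0, k_night=5.0):
    """Assess contrast from the mean energy differences of bright and dark
    non-background strip elements.

    By day, low contrast (fog, rain) leaves mostly dark vehicles against
    bright haze; by night, no dark regions are detectable at all, so the
    two means together indicate whether contrast is sufficient.
    """
    dE_bright = dE[bright].mean() if bright.any() else 0.0
    dE_dark = dE[dark].mean() if dark.any() else 0.0
    ok = dE_dark > k_night or dE_bright > k_day
    return ("sufficient" if ok else "insufficient"), dE_bright, dE_dark
```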

Page 31: Pattern Recognition Presentation
Page 32: Pattern Recognition Presentation

Assessing shadow presence

Scenes that are well-lit can be decomposed into two sub-classes: shadows and non-shadows

In scenes with shadows, the contrast of a "bright" portion of a vehicle against the road surface would be less than that of a "dark" portion
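A sketch of the resulting test, using the k4 = 1.2 threshold quoted on a later slide; the exact form of the comparison is an assumption:

```python
def has_shadows(dE_bright_mean, dE_dark_mean, k4=1.2):
    """Shadow test: shadows count as dark elements and contrast strongly
    against the road, so in shadowed scenes the dark-element contrast
    dominates the bright-element contrast.

    k4 = 1.2 is the value reported to work well in the paper.
    """
    return dE_dark_mean > k4 * dE_bright_mean
```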

Page 33: Pattern Recognition Presentation
Page 34: Pattern Recognition Presentation

Assessing shadow presence

Using k4 = 1.2, this method has been found to work well

Sometimes, when there are very faint shadows, it does classify the scene as having no shadows

Fails when the background is not a road

For example, in some scenes the camera looks at the road primarily from the side, and the vehicles occlude either objects (e.g. trees) or the sky as they move across the scene

Page 35: Pattern Recognition Presentation

Illumination Assessment module

Three methods for assessing lighting, contrast, and shadows are applied sequentially

Page 36: Pattern Recognition Presentation

Illumination Assessment module

Page 37: Pattern Recognition Presentation

Conclusions

In the strip representation, the 2D -> 1D transformation is not clearly explained

In strip classification, the global offset "o" is said to be measured by a different process, but the paper does not explain that process

The paper states that the deployment results were satisfactory, but it does not provide any statistical data to support the claim

Page 38: Pattern Recognition Presentation

References

Wixson, L., Hanna, K., and Mishra, D., "Improved Illumination Assessment for Vision-Based Traffic Monitoring," VS'98 (Image Processing for Visual Surveillance), 1998

Hanna, K., Wixson, L., and Mishra, D., "Illumination Assessment for Vision-Based Traffic Monitoring," ICPR '96: Proceedings of the International Conference on Pattern Recognition, 1996

Ferrier, N.J., Rowe, S.M., and Blake, A., "Real-Time Traffic Monitoring," in Proceedings of the IEEE Workshop on Applications of Computer Vision, pages 81-88, 1994

Kilger, M., "A Shadow Handler in a Video-based Real-time Traffic Monitoring System," in Proceedings of the IEEE Workshop on Applications of Computer Vision, pages 11-18, 1992

Page 39: Pattern Recognition Presentation

Questions?