Transcript of Visual Lane Detection and Analysis
Adam D. DeNoble, Computer Science Senior Project, Spring 2006

Page 1:

Visual Lane Detection and Analysis

Adam D. DeNoble
Computer Science Senior Project
Spring 2006

http://compsci.snc.edu/cs460/denoad/

Page 2:

Aims of this Project

To develop novel visual lane detection and analysis algorithms that are efficient enough to function in a real-time robotic environment

Use the algorithms to reliably provide the exclusive guidance for an autonomous robotic vehicle traveling along a roadway

Page 3:

What I Started With

The Zoom lawn tractor, modified so that an onboard computer could set the speed and direction of the left and right wheels within a certain range of values, based on user or program control

The capability to rapidly capture a full-color image from a vehicle-mounted digital camera

An idea that most snapshots of the road have enough common characteristics that it would be possible to come up with a general solution for finding the road within the image

Page 4:

Visual What?

Visual lane detection is the process of automatically determining the positions of the left and right boundary lines of a lane within a digital image acquired from a camera mounted on a vehicle in the lane

Lane analysis is the process of deriving useful conclusions from the results of the lane detection phase, such as the current position and trajectory of the vehicle

Page 5:

Visual Why?

There are numerous applications of this exciting, relatively new field of computer vision research, many involving autonomous robotic vehicles, as is demonstrated in this project

Non-robotic applications also exist, such as “the driver is falling asleep and is leaving the lane” warning devices for traditional automobiles

Page 6:

Can You Identify the Road in this Picture?

Silly question, right? Wrong!

Easy for a human - the eyes and brain are amazing!

Not so trivial for a computer…

Page 7:

Not Trivial, but Possible!

Big deal, right? Just look for the pavement!

Well, computers only know about 0’s and 1’s, not pavement – but they can be “taught” about other things…

Page 8:

Looks Like Cobblestone to Me!

Not pavement like the last image, but the same software using the same configuration found the lane…

Page 9:

So What is the Common Denominator?

Every lane has a left and a right boundary line

It seems as though one of them is always the dominant vertically inclined line in an image

Page 10:

Dominant What?

There are a lot of vertically inclined lines in that picture!

Clearly though, one of them is the longest and most contiguous – the dominant line

Page 11:

What About the Other Line?

Sure, the dominant line kind of sticks out like a sore thumb in the edge map of the image. But then how to identify the conjugate line?

If the dominant line is the left side boundary, then the right side boundary is called its conjugate line, and vice versa

There can be lots of vertically inclined lines in an image that are smaller than the dominant line but larger than the actual conjugate line, which poses a bit of a challenge in automatically finding the conjugate line

Page 12:

Conjugate Line

The primary defining characteristic of the conjugate line is that its slope will be of the opposite sign of the dominant line’s slope

This is a fundamental characteristic of a lane that must always be true

Figure annotations:
- Dominant line
- Slope same sign as dominant line, so disqualified
- Opposite sign slope but too small, so disqualified
- Longest, most contiguous line with opposite sign slope, so this is the conjugate line

Page 13:

A Technique for Filtering out Anomalous Conjugate Lines

The intersection of the conjugate pair must be at or above the vertical “chop height”

Figure annotation: the yellow line is bigger but fails the intersection test

Page 14:

Chop Height???

The gray line is the vertical starting point used when searching for lines (top-down traversal)

It helps to avoid processing the “noise” of the horizon in an image

Page 15:

Horizon Noise

The road lines are never present in the horizon anyway

So, the algorithm works better by simply ignoring the horizon area

Page 16:

Recap of Line Definitions

The dominant vertically inclined line within an image is considered to be either the left or right side road boundary

Its conjugate (opposite) boundary line is the longest, most contiguous line with a slope of opposite sign that passes the “intersection test”

Page 17:

How to Find the Lines?

As is the case in much of computer science, such as comparison-based sorting, the old rule of thumb “you must look at everything” applies here as well

It is necessary to inspect as many potential lines as possible that fall within a certain logical range of possibilities (slopes, origin points, etc.)

So, the range of possible lines is generated, and then each line is traversed to determine its size (length) and degree of continuity

Page 18:

Generation of Potential Lines

Equations for potential left side and right side lines are generated from each x origin along the chop line, downward in fan-shaped groups

Figure note: low line density shown for clarity

Page 19:

Analytical Geometry Anyone?

Generating the equations for a fan of lines is done based upon slope values

In order to achieve equidistant spacing between lines within a fan, an equation based upon a harmonic series is used to determine the slope of each line in the fan

A pre-set minimum slope and maximum slope for each fan is used to generate the correct harmonic equation
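One way to realize the harmonic-series spacing described above can be sketched as follows. This is a hedged Python illustration (the project itself was written in Visual Basic, and the function name and parameterization here are assumptions, not the actual code): if the *reciprocals* of the slopes are evenly spaced between the pre-set extremes, the lines of a fan are equidistant at any fixed depth below their shared origin, since a line of slope m advances by d/m horizontally over d rows. Slopes whose reciprocals are in arithmetic progression are, by definition, a harmonic progression.

```python
def fan_slopes(min_slope, max_slope, count):
    """Generate `count` slopes for one fan of candidate lines.

    The reciprocals 1/m are spaced evenly between 1/max_slope and
    1/min_slope, so the slopes form a harmonic progression and the
    resulting lines are equidistant at any fixed depth below the
    fan's origin on the chop line.
    """
    inv_lo, inv_hi = 1.0 / max_slope, 1.0 / min_slope
    step = (inv_hi - inv_lo) / (count - 1)
    return [1.0 / (inv_lo + k * step) for k in range(count)]

# At a depth of d rows below the origin x0, line k passes through
# x = x0 + d / m_k; with the slopes above, these x values are
# evenly spaced across the fan.
```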

Page 20:

Scoring the Potential Lines

In order to identify the dominant line and then to choose the most probable conjugate line, each potential line needs to be scored to determine how long it is and how contiguous it is

Obviously, most of the potential lines do not actually exist within an image

In addition to valid lines, there will be many “tiny” lines that receive a low score as a result of “noise” in the edge map of the image

Page 21:

Scoring a Line

Artificially low density (for clarity) example of a typical edge map of a road image

Black pixels represent edges detected from the source image by an edge detection algorithm such as the Sobel algorithm

Page 22:

What Does the Score Mean?

Dominant line and its conjugate line shown

19 edge pixels fall on the dominant line

9 edge pixels fall on the conjugate line

Standardized scores are defined as the product of the length of a line and the percentage of possible pixels that actually fall on it
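The standardized score defined above can be written as a one-line helper. A minimal hedged sketch in Python (the original code was Visual Basic; this helper is illustrative only):

```python
def standardized_score(length, edge_hits, checks):
    """Score = line length x fraction of checked pixels that were edges.

    Long lines that are densely covered by edge pixels score high;
    short or sparse ("noisy") lines score low.
    """
    if checks == 0:
        return 0.0
    return length * (edge_hits / checks)

# A 100-pixel line with 80 edge hits out of 100 checks scores 80.0,
# beating a fully contiguous but shorter 40-pixel line (score 40.0).
```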

Page 23:

Traversing a Line

When scoring a line, it is traversed from top to bottom, and each time a pixel is checked, if it is black (edge), the score of the line is incremented

In order to consistently score lines of different slope, an x step and y step value is computed for each line:

X step = total ΔX / length of line
Y step = total ΔY / length of line

(Figure annotations: total ΔX and total ΔY shown for the left and right lines)

Page 24:

Traversing a Line

Pixel checking begins at the origin point, and the X and Y coordinates are incremented by the Xstep and Ystep values for the next check, all the way until the end of the line within the image

Also, if a substantial gap is found in a line (e.g. no black pixels found in 8 sequential checks), the line is given up on

Checked points: (x0, y0), (x0 + xstep, y0 + ystep), (x0 + 2(xstep), y0 + 2(ystep)), (x0 + 3(xstep), y0 + 3(ystep)), (x0 + 4(xstep), y0 + 4(ystep)), …
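The traversal described on the last two slides might look like the following. This is a hedged Python sketch (the project's implementation was in Visual Basic); the 0/1 edge-map representation, the function name, and the use of the larger axis span as "length of line" are assumptions made for illustration:

```python
def score_line(edge, x0, y0, total_dx, total_dy, max_gap=8):
    """Traverse a line from (x0, y0) in equal x/y steps, counting edge
    pixels; give up after max_gap consecutive non-edge checks.

    edge is a 2-D list indexed [row][column], with 1 marking an edge
    pixel (black in the edge map) and 0 marking background.
    """
    length = int(max(abs(total_dx), abs(total_dy)))  # checks along line
    if length == 0:
        return 0
    xstep, ystep = total_dx / length, total_dy / length
    hits, gap = 0, 0
    for k in range(length + 1):
        x = int(round(x0 + k * xstep))
        y = int(round(y0 + k * ystep))
        if 0 <= y < len(edge) and 0 <= x < len(edge[0]) and edge[y][x]:
            hits += 1
            gap = 0
        else:
            gap += 1
            if gap >= max_gap:   # substantial gap: abandon the line
                break
    return hits
```

The raw hit count returned here would then feed the standardized score (length times hit fraction) described earlier.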

Page 25:

Lane Detection Recap

Equations for all logically possible lane boundary lines are generated

Every line is traversed and scored

The highest scoring line is the dominant line

The highest scoring line with a slope of opposite sign of the slope of the dominant line that passes the “intersection test” is the conjugate line

At this point, the positions of the left and right lane boundary lines are known with a high degree of certainty in the average case
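The selection steps in this recap can be sketched as follows. Hedged Python (the original was Visual Basic); the (score, slope, x0) tuple representation, the parameterization of each line by its origin on the chop line, and the function name pick_lane are illustrative assumptions, not the project's actual code:

```python
def pick_lane(candidates, chop):
    """Pick the dominant line and its conjugate from scored candidates.

    Each candidate is (score, slope, x0): a line originating at
    (x0, chop) on the chop line, i.e. x = x0 + (y - chop) / slope.
    Slopes are assumed nonzero (vertically inclined lines only).
    The conjugate must have an opposite-sign slope and must intersect
    the dominant line at or above the chop line (image y grows
    downward, so "above" means y_int <= chop).
    """
    dominant = max(candidates, key=lambda c: c[0])
    _, m_d, x_d = dominant
    for score, m, x0 in sorted(candidates, key=lambda c: -c[0]):
        if m * m_d >= 0:       # same-sign slope (incl. dominant itself)
            continue
        # y coordinate where the pair of lines intersect
        y_int = chop + (x0 - x_d) / (1.0 / m_d - 1.0 / m)
        if y_int <= chop:      # passes the intersection test
            return dominant, (score, m, x0)
    return dominant, None      # no valid conjugate found in this frame
```

Note how a higher-scoring opposite-sign line can still lose to a lower-scoring one by failing the intersection test, which is exactly the "yellow line" case shown on the earlier slide.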

Page 26:

Lane Analysis

So now we know where the left and right side lane boundary lines are within the image

How does that help us?

That information is fairly useless by itself, but it can be used to derive some very important information!

Page 27:

Current Vehicle Trajectory

If the vehicle were pointing straight down the road (0˚ trajectory), the vertical line that passes through the intersection of the left and right boundary lines would be centered in the image

The vehicle trajectory is defined as (90 * ((i - c)/(w – c)))˚, where, per the figure, i is the x coordinate of the boundary-line intersection, c is the horizontal center of the image, and w is the image width

Page 28:

Current Horizontal Position

Assume that if the vehicle were positioned in the center of the lane, its horizontal position h is equal to 0; if it is on the far left, -90; on the far right, 90

Then the current horizontal position is defined as

h = -90 * ((x2 – i) – (i – x1)) / (x2 – x1)

where, per the figure, x1 and x2 are the x coordinates of the left and right boundary lines and i is the vehicle’s reference point (the horizontal center of the image) between them

So, in this example, the horizontal position is moderately negative (vehicle left of center)

Page 29:

Getting Back on Track

Chances are that the vehicle is not perfectly centered in the lane and is not pointing perfectly straight ahead, especially in a robotic application where it’s all about dynamic compensation

So, now that we know where we are and where we are pointing, as well as where we want to be and where we should be pointing, we can really do some “magic”

Page 30:

Heading Correction Computation

The black vehicle represents our current status and the red vehicle our desired status

Since we are able to make a maximum turn of 90 degrees in either direction (the Zoom robot uses skid-steering), the opposite sign of the horizontal position value we calculated previously becomes our base heading correction (to be centered in the road)

Since we might already be pointing in an unfavorable, neutral or favorable direction that facilitates or resists that correction, we add the horizontal correction value and the trajectory correction value (opposite sign of current trajectory) together to generate an “overall” correction value, which may have a magnitude > 90, in which case it is truncated to a magnitude of 90
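The three formulas from the last few slides combine into a short sketch. Hedged Python (the project was written in Visual Basic); the function names and the reading of the figure symbols i, c, w, x1, x2 are assumptions as noted on the earlier slides:

```python
def trajectory_deg(i, c, w):
    """Current trajectory (90 * (i - c) / (w - c)) degrees, where i is
    the x of the boundary-line intersection, c the image center column
    and w the image width; 0 means pointing straight down the road."""
    return 90.0 * (i - c) / (w - c)

def horizontal_position(x1, x2, i):
    """h = -90 * ((x2 - i) - (i - x1)) / (x2 - x1): 0 when centered in
    the lane, -90 at the far left, 90 at the far right."""
    return -90.0 * ((x2 - i) - (i - x1)) / (x2 - x1)

def heading_correction(h, trajectory):
    """Base correction is -h (steer back toward the lane center); the
    trajectory correction -trajectory is added, and the overall value
    is truncated to the robot's +/-90 degree skid-steering range."""
    total = -h - trajectory
    return max(-90.0, min(90.0, total))
```

For example, a vehicle left of center (h negative) that is already angled back toward the centerline (trajectory positive) gets a smaller overall correction than one pointing straight ahead, which matches the favorable/unfavorable-direction discussion above.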

Page 31:

Lane Analysis Recap

Once the positions of the left and right side lane boundary lines are known, the current horizontal position within the lane and the trajectory of the vehicle can be calculated

Based on that information, a heading correction between –90 and 90 degrees can be computed, indicating the direction the vehicle should turn at that moment in order to move towards being centered in the lane and pointing straight ahead

Page 32:

Software Development - Lab Mode Shell

Page 33:

Software Development

The lab mode shell allows files of images acquired from a vehicle-mounted camera to be processed offline

This is where most of the “work” takes place

The lane detection and analysis algorithms are packaged in a Visual Basic module file that is shared between the lab mode shell and the robot mode shell, so that updated versions are always instantly ready for live testing

Page 34:

Robot Mode Shell

Page 35:

Efficiency

In order to provide reliable navigation for an autonomous robotic vehicle, the lane detection and analysis algorithms need to be able to process images as quickly as possible

The faster images can be accurately processed, the faster the vehicle can safely travel on vision guidance

Current versions of this software are processing > 10 frames per second on average, more than adequate for the maximum speed of the Zoom robot

Page 36:

Efficiency

One of the most processor-intensive portions of the algorithm is finding the next highest scoring conjugate line when many potential lines are rejected by the intersection test

This was alleviated by implementing a max-heap to store the arrays of potential line information

Now, when the next best line is needed, all that is necessary is a speedy O(log n) call to ExtractMax()
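A hedged Python sketch of the same idea (the project used a hand-built max-heap in Visual Basic; here Python's standard heapq module, a min-heap, is reused by negating the scores, and the class name and tuple layout are illustrative):

```python
import heapq

class LineHeap:
    """Max-heap of scored candidate lines, so that when a candidate is
    rejected by the intersection test, the next-best line is an
    O(log n) extract rather than a rescan of all candidates."""

    def __init__(self, lines):
        # lines: iterable of (score, slope, x0) tuples; scores are
        # negated because heapq implements a min-heap.
        self._heap = [(-s, m, x) for s, m, x in lines]
        heapq.heapify(self._heap)       # O(n) build

    def extract_max(self):
        """Pop and return the highest-scoring remaining line, O(log n)."""
        s, m, x = heapq.heappop(self._heap)
        return (-s, m, x)
```

The conjugate search then just calls extract_max() repeatedly until a line passes both the opposite-sign-slope check and the intersection test.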

Page 37:

Does it ALWAYS Work?

It is likely that there will be some image frames that are simply so noisy or otherwise unclear that the algorithm cannot find the road

However, thanks to the rapid frame rate and dynamic frame-by-frame heading correction, the implications of these anomalies are minimal in reasonable conditions

Page 38:

Results

This project has always been promising in theory and lab mode, but live testing in robot mode on the Zoom has been a constant humbling reminder of the challenges posed by an unpredictable environment and robot

All the hard work has paid off, since the latest version of the lane detection and analysis software has recently guided the Zoom robot for 1 mile on a rural road, at maximum speed, without faltering!

Page 39:

Special Thanks To…

The CS faculty for their support of my work on this project, especially Professor Blahnik, who probably spent at least as many hours in connection with this project as I did, maintaining and improving the non-vision-based aspects of the Zoom robot

The Ariens Corporation for their generous donation of the Zoom tractor

Corey Bucher from Ariens for taking time out of his day to be with us today

All of the other students who have previously worked with the Zoom to bring it to the point where projects like this are a possibility

Page 40:

Look Ma, No Hands!!!