Introduction to Robotics: 4. Mathematics of sensor processing.


Page 1

Introduction to Robotics

4. Mathematics of sensor processing.

Page 2

Examples

• Location
  – Dead reckoning
    • Odometry using potentiometers or encoders
    • Steering: differential, Ackerman
  – Inertial Navigation Systems (INS)
    • Optical gyro
    • Resonant fiber optic gyro

• Ranging
  – Triangulation
  – MIT near-IR ranging
  – Time of flight

Page 3

Potentiometers ("pots")

• Low-cost rotational displacement sensors for applications involving:
  – low speed
  – medium accuracy
  – no continuous rotation

• Error sources:
  – poor reliability due to dirt
  – frictional loading of the shaft
  – electrical noise, etc.

• Their use has fallen off in favor of the more versatile incremental optical encoders.

Page 4

Dead Reckoning

• Definition: a simple mathematical procedure for determining the present location of a vessel (vehicle) by advancing some previous position through known course and velocity information.

• The simplest implementation is termed odometry.

• Odometry sensors:

  – potentiometers

  – encoders: brush, optical, magnetic, capacitive, inductive

Page 5

Introduction to Odometry

Given a two wheeled robot, odometry estimates position and orientation from left and right wheel velocities as a function of time.

B = the wheel separation
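As a concrete illustration, here is a minimal Python sketch of this odometry update, assuming the left and right wheel velocities are sampled at a fixed time step dt and using simple Euler integration. The function name and the numbers are illustrative, not part of the original slides.

import math

def update_pose(x, y, theta, v_left, v_right, dt, B):
    """Advance a differential-drive pose by one time step.

    x, y, theta     -- current position (m) and heading (rad)
    v_left, v_right -- left/right wheel velocities (m/s)
    dt              -- time step (s); B -- wheel separation (m)
    """
    v = (v_right + v_left) / 2.0          # velocity along the path of travel
    omega = (v_right - v_left) / B        # heading rate
    x += v * math.cos(theta) * dt         # simple Euler integration
    y += v * math.sin(theta) * dt
    theta += omega * dt
    return x, y, theta

# Example: equal wheel speeds -> straight line along the x axis
pose = (0.0, 0.0, 0.0)
for _ in range(100):
    pose = update_pose(*pose, v_left=0.5, v_right=0.5, dt=0.01, B=0.4)
print(pose)   # approximately (0.5, 0.0, 0.0)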

Page 6

Differential Steering

• Two individually controlled drive wheels.

• Enables the robot to spin in place.

• Robot displacement D and velocity V along the path of travel are the averages of the two wheel values:

  D = (Dl + Dr) / 2        V = (Vl + Vr) / 2

where
  Dl, Vl = displacement and velocity of the left wheel
  Dr, Vr = displacement and velocity of the right wheel

and each wheel's displacement is the arc (a portion of the circumference of the circle) traveled by that wheel.

Page 7

Differential Steering

When the robot turns, each wheel follows an arc about the same center of rotation:

  Dl = θ (R - b/2)        Dr = θ (R + b/2)

where θ is the change in heading, R is the radius of curvature of the robot's path, and b is the wheel separation.

Solving for θ by subtracting the two relations yields:

  θ = (Dr - Dl) / b

Similarly, solving for R yields:

  R = (b/2) (Dr + Dl) / (Dr - Dl)

The wheel separation b in the denominator is a significant source of error, due to the uncertainties associated with the effective point of contact of the tires.

Page 8

Over an infinitesimal time increment, the wheel speeds can be assumed constant, so the path has a constant radius of curvature.

Page 9

Differential Steering. Drive controller

For the left wheel,

  φl = 2π Nl / Ce        and        Dl = rl φl

where
  φl = left wheel rotation
  rl = effective left wheel radius
  Nl = number of counts of the left encoder
  Ce = number of counts per wheel revolution

and a similar relation holds for the right wheel. The drive controller will attempt to make the robot travel a straight line by ensuring that Nl and Nr are the same.

This is not an accurate method, since the effective wheel radius is a function of the compliance of the tire and of the weight on the wheel (empirical values; tire compliance is itself a function of wheel rotation).
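A hedged sketch tying the encoder relation above to a dead-reckoning update; the effective wheel radii, encoder counts, and wheel separation B used below are made-up illustrative values.

import math

def wheel_displacements(n_left, n_right, counts_per_rev, r_left, r_right):
    """Convert encoder counts into wheel displacements (relations on this slide)."""
    phi_left = 2.0 * math.pi * n_left / counts_per_rev    # wheel rotation (rad)
    phi_right = 2.0 * math.pi * n_right / counts_per_rev
    return r_left * phi_left, r_right * phi_right          # arc lengths (m)

def dead_reckon_step(x, y, theta, d_left, d_right, B):
    """Update the pose from one pair of wheel displacements."""
    d = (d_left + d_right) / 2.0            # displacement of the robot center
    d_theta = (d_right - d_left) / B        # heading change; B in the denominator
    x += d * math.cos(theta + d_theta / 2.0)
    y += d * math.sin(theta + d_theta / 2.0)
    return x, y, theta + d_theta

d_l, d_r = wheel_displacements(512, 540, counts_per_rev=1024, r_left=0.05, r_right=0.05)
print(dead_reckon_step(0.0, 0.0, 0.0, d_l, d_r, B=0.4))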

Page 10

Differential Steering. Other reasons for inaccuracies.

In climbing over a step discontinuity of height h, the wheel rotates more than the horizontal distance it covers, so the perceived distance differs from the actual distance traveled.

This displacement differential between the left and right drive wheels results in an instantaneous heading change.

Floor slippage: this problem is especially noticeable in exterior implementations known as skid steering, routinely implemented in bulldozers and armored vehicles.

Skid steering is employed only in teleoperated vehicles.

Page 11

Ackerman Steering.

The method of choice for outdoor autonomous vehicles.

• Used in order to provide fairly accurate dead-reckoning solution, while supporting the traction and ground clearance needs of all-terrain operations.

• Is designed to ensure that when turning, the inside wheel is rotated to a slightly sharper angle than the outside wheel, thereby eliminating geometrically induced tire slippage.

Ackerman equation:

  cot θo - cot θi = d / l

where
  θi, θo = relative steering angles of the inner and outer wheels
  l = longitudinal wheel separation
  d = lateral wheel separation

Examples include:

• HMMWV-based Teleoperated Vehicle (US Army) program.

• MDARS (Mobile Detection Assessment and Response System) Exterior - an autonomous patrol vehicle.

Page 12

Ackerman Steering.

x = distance from the inside wheel to the center of rotation.

From the turning geometry:

  cot θi = x / l        cot θo = (x + d) / l

Subtracting the two relations recovers the Ackerman equation:

  cot θo - cot θi = d / l
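A small Python check of the Ackerman relation as reconstructed above, with x, l, and d defined as on this slide; the numeric values are illustrative.

import math

def ackerman_angles(x, l, d):
    """Steering angles for a turn about a point a distance x from the inside wheel.

    l -- longitudinal wheel separation (wheelbase)
    d -- lateral wheel separation (track)
    """
    theta_i = math.atan2(l, x)        # inner wheel: sharper angle
    theta_o = math.atan2(l, x + d)    # outer wheel: shallower angle
    return theta_i, theta_o

theta_i, theta_o = ackerman_angles(x=4.0, l=2.5, d=1.5)
# The Ackerman relation cot(theta_o) - cot(theta_i) = d / l should hold exactly:
print(1 / math.tan(theta_o) - 1 / math.tan(theta_i), 1.5 / 2.5)   # 0.6 and 0.6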

Page 13

Inertial Navigation

• Continuous sensing of acceleration along each of the three axes, integrated over time to derive velocity and position.

• Implementations are demanding from the standpoint of minimizing the various error sources.

• High-quality navigation systems have a typical drift of 1 nautical mile per hour and, until a few years ago, used to cost $50K-$70K.

  – High-end systems perform better than 0.1% of the distance traveled and used to cost $100K-$200K.

  – Today, relatively reliable equipment suitable for UGV navigation costs starting at about $5K.

• Low cost fiber optic gyros and solid state accelerometers were developed for INS.

Page 14

Gyroscopes

Mechanical gyroscopes operate by sensing the change in direction of some actively sustained angular or linear momentum.

A typical two-axis flywheel gyroscope senses a change in direction of the angular momentum associated with a spinning motor.

Page 15

Optical Gyroscopes

• Principle first discussed by Sagnac (1913).

• The first ring laser gyro (1986) used a He-Ne laser.

• Fiber optic gyros (1993) were installed in Japanese automobiles in the 90s.

• The basic device:

– two laser beams traveling in opposite directions (i.e. counter-propagating) around a closed loop path.

Standing wave created by counter-propagating light beams. Schulz-DuBois idealization model

– constructive and destructive interference patterns:

  * can be formed by splitting off and mixing a portion of the two beams

  * used to determine the rate and direction of rotation of the device

Page 16

Active Ring-Laser Gyro

• Introduces light into the doughnut by filling the cavity with an active lasing medium.

• Measures the change in path length ΔL as a function of the angular velocity of rotation Ω, the radius of the circular beam path r, and the speed of light c:

  ΔL = 4π r² Ω / c        (Sagnac effect)
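A worked numeric example of the Sagnac relation as reconstructed above; the radius and rotation rate below are arbitrary illustrative values, chosen to show how tiny the path-length change is.

import math

c = 2.998e8              # speed of light (m/s)
r = 0.10                 # radius of the circular beam path (m) -- illustrative value
omega = math.radians(10) # rotation rate: 10 deg/s expressed in rad/s

delta_L = 4 * math.pi * r**2 * omega / c   # Sagnac path-length change
print(delta_L)           # ~7.3e-11 m, a small fraction of an optical wavelength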

• For lasing to occur in a resonant cavity, the round-trip beam path must be precisely equal in length to an integral number of wavelengths at the resonant frequency.

•The frequencies of the two counter-propagating waves must change, as only oscillations with wavelength satisfying the resonance condition can be sustained in the cavity.

Page 17

Active Ring-Laser Gyro

• For an arbitrary cavity geometry with an area A enclosed by the loop beam path, perimeter of the beam path L, and wavelength λ, the frequency difference between the counter-propagating beams is:

  Δf = 4 A Ω / (λ L)

• The glass fiber forms an internally reflective waveguide for optical energy.

• Multiple turns of fiber can implement the doughnut-shaped cavity; the path change due to the Sagnac effect is then essentially multiplied by N, the number of turns.

Page 18

Open Loop Interferometric Fiber Optic Gyro (IFOG)

The speed of light in the fiber medium is c/n, where n = the refractive index.

As long as the entry angle is less than a critical angle θc, the ray is guided down the fiber virtually without loss.

The critical angle is set by the numerical aperture of the fiber:

  sin θc = NA = sqrt(nco² - ncl²)

where
  NA = the numerical aperture of the fiber
  nco = index of refraction of the glass core
  ncl = index of refraction of the cladding

We need a single-mode fiber, so that only the two counter-propagating waves can exist. But in such a fiber light may randomly change polarization states, so a special polarization-maintaining fiber is needed.

Page 19

Open Loop IFOG

The measured output is the number of fringes of phase shift due to gyro rotation.

Advantages:

• reduced manufacturing costs

• quick start-up

• good sensitivity

Disadvantages:

• long length of optical fiber required

• limited dynamic range in comparison with active ring-laser gyros

• scale factor variations

Used in automobile navigation, pitch and roll indicators, and attitude stabilization.

Page 20

Resonant Fiber-Optic Gyros

• Evolved as a solid-state derivative of the passive ring gyro, which makes use of a laser source external to the ring cavity.

• A passive resonant cavity is formed from a multi-turn closed loop of optical fiber.

• Advantages: high reliability, long life, quick start-up, light weight, up to 100 times less fiber.

• Input coupler injects frequency modulated light in both directions.

• In the absence of loop rotation, maximum coupling occurs at the resonant frequency.

• If the loop rotates the resonant frequency must shift.

  Δf = D Ω / (n λ)

where D is the diameter of the fiber loop, n is the refractive index of the fiber, and λ is the wavelength.

Page 21

Ranging

• Distance measurement techniques:
  – triangulation
  – ToF: time of flight (pulsed)
  – PhS: phase-shift measurement (CW: continuous wave)
  – FM: frequency modulation (CW)
  – interferometry

• Non-contact ranging sensors:
  – active:
    • Radar - ToF, PhS, FM
    • Sonar - ToF (speed of sound; slow; also used in water)
    • Lidar - laser-based ToF, PhS
  – passive

Page 22

GPS: Navstar Global Positioning System

• A 24-satellite system, orbiting the earth every 12 h at an altitude of 10,900 nautical miles.

• 4 satellites are located in each of 6 planes inclined 55° with respect to the earth's equator.

• The absolute 3D location of any GPS receiver is determined by trilateration techniques based on the time of flight of uniquely coded spread-spectrum radio signals transmitted by the satellites.

• Problems:
  – time synchronization and the theory of relativity
  – precise real-time location of the satellites
  – accurate measurement of signal propagation time
  – sufficient signal-to-noise ratio
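The Python sketch below illustrates the trilateration idea with a toy Gauss-Newton solver for receiver position and clock bias from four pseudoranges. The satellite coordinates, clock bias, and solver structure are illustrative assumptions; real GPS processing must also handle the relativity, satellite-clock, and atmospheric issues discussed on the next slides.

import numpy as np

C = 2.998e8  # speed of light (m/s)

def solve_position(sat_pos, pseudoranges, iters=10):
    """Estimate receiver position and clock bias from >= 4 pseudoranges.

    sat_pos      -- (N, 3) satellite positions (m)
    pseudoranges -- (N,) measured ranges including the receiver clock error (m)
    Returns [x, y, z, clock_bias_m] via Gauss-Newton least squares.
    """
    est = np.zeros(4)                            # start from the origin, zero bias
    for _ in range(iters):
        diff = est[:3] - sat_pos                 # vectors from satellites to estimate
        ranges = np.linalg.norm(diff, axis=1)    # geometric ranges
        residuals = pseudoranges - (ranges + est[3])
        J = np.hstack([diff / ranges[:, None], np.ones((len(sat_pos), 1))])
        est += np.linalg.lstsq(J, residuals, rcond=None)[0]
    return est

# Toy example: four satellites at assumed positions, receiver near the origin
sats = np.array([[ 1.5e7,  0.5e7, 2.2e7],
                 [-1.0e7,  1.8e7, 1.9e7],
                 [ 0.3e7, -2.0e7, 2.4e7],
                 [-1.7e7, -0.9e7, 1.6e7]])
true_pos = np.array([1000.0, -2000.0, 500.0])
clock_bias = 3.0e-6 * C                          # 3 microseconds of receiver clock error
rho = np.linalg.norm(sats - true_pos, axis=1) + clock_bias
print(solve_position(sats, rho))                 # ~[1000, -2000, 500, ~900]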

Page 23

GPS: Navstar Global Positioning System

• Spread-spectrum technique: each satellite transmits a periodic pseudo-random code on two different L-band frequencies (1575.42 and 1227.6 MHz).

• Solutions:

  – Time synchronization: atomic clocks.

  – Precise real-time location of satellites: individual satellite clocks are monitored by dedicated ground tracking stations and continuously advised of their measurement offsets from official GPS time.

  – Accurate measurement of signal propagation time: a pseudo-random code is modulated onto the carrier frequencies. An identical code is generated at the receiver on the ground. The time shift is calculated from the comparison; the fourth satellite is used to solve for the receiver clock offset.

Page 24

GPS: Navstar Global Positioning System

• The accuracy of civilian GPS is degraded to roughly 300 m, but there are quite a few commercial products which significantly enhance this accuracy.

• The Differential GPS (DGPS) concept is based on the existence of a second GPS receiver at a precisely surveyed location.

• We assume that the same corrections apply to both locations.

• The position error may be reduced to well under 10 m.

• Some other up-to-date commercial products claim an accuracy of several cm.

Page 25

Compact Outdoor Multipurpose POSE (Position and Orientation Estimation) Assessment Sensing System (COMPASS)

• COMPASS is a flexible suite of sensors and software integrated for GPS and INS navigation.

• COMPASS consists of a high-accuracy, 12-channel, differential Global Positioning System (GPS) with an integrated Inertial Navigation System (INS) and Land Navigation System (LNS).

• This GPS/INS/LNS is being integrated with numerous autonomous robotic vehicles by Omnitech for military and commercial applications.

• COMPASS allows semiautonomous operation with multiple configurations available.

Page 26

Triangulation

• Active: employing
  – a laser source illuminating the target object, and
  – a CCD camera.

Calibration targets are placed at known distances z1 and z2.

Point-source illumination of the image effectively eliminates the correspondence problem.

Page 27

Triangulation by Stereo Vision

Based on the Law of Sines, assuming the measurement is done between three coplanar points.

– Passive: stereo vision measures the angles (α, β) from two points (P1, P2) located at a known relative distance (A).
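A minimal sketch of the Law of Sines range computation, assuming α and β denote the angles between the baseline P1P2 (of length A) and the two lines of sight; the function name and the numbers are illustrative.

import math

def range_from_angles(A, alpha, beta):
    """Perpendicular range to a target sighted from two points a distance A apart.

    alpha, beta -- angles (rad) between the baseline and the lines of sight
                   measured at P1 and P2 respectively (all three points coplanar).
    """
    gamma = math.pi - alpha - beta                  # angle at the target
    d1 = A * math.sin(beta) / math.sin(gamma)       # law of sines: distance P1 -> target
    return d1 * math.sin(alpha)                     # perpendicular distance to the baseline

# Target straight ahead of the midpoint of a 1 m baseline, 10 m away:
alpha = beta = math.atan2(10.0, 0.5)
print(range_from_angles(1.0, alpha, beta))          # ~10.0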

Limiting factors:

• reduced accuracy with increasing range.

• angular measurement errors.

• may be performed only in the stereo observation window, because of missing parts/ shadowing between the scenes.

Page 28

• Horopter is the plane of zero disparity.

• Disparity is the displacement of the image as shifted between the two scenes.

• Disparity is inversely proportional with the distance to the object.

Basic steps involved in stereo ranging process:

• a point in the image of one camera must be identified.

• the same point must be located in the image of the other camera.

• the lateral position of both points must be measured with respect to a common reference.

• range Z is then calculated from the disparity in the lateral measurements.

Triangulation by Stereo vision

Page 29

• Correspondence is the procedure of matching the two images.

  – Matching is difficult in regions where the intensity or color is uniform.

  – Shadows may appear in only one image.

• Epipolar restriction: reduces the 2D search to a single dimension.

• The epipolar surface is a plane defined by the lens center points L and R and the object of interest at P.

Triangulation by Stereo vision

Page 30

MIT Near-IR Ranging

• One-dimensional implementation.

• Two identical point-source LEDs placed a known distance "d" apart.

• The incident light is focused on the target surface.

• The emitters are fired in sequence.

• The reflected energy is detected by a phototransistor.

• Since the received intensity falls off with the distance traveled, the ratio of the two sequential readings can be solved for the range.

Assumption: the surface diffuses the reflected light perfectly (a Lambertian surface) and the target is wider than the field of view.

Page 31

Basics of Machine Vision

Page 32

Vision systems are very complex.

• Focus on techniques for closing the loop in robotic mechanisms.

• How might image processing be used to direct behavior of robotic systems?

– Percept inversion: what must the world model be to produce the sensory stimuli?

– Static reconstruction architecture.

– Task A: a stereo pair is used to reconstruct the world geometry.

Page 33

Reconstructing the image is usually not the proper solution for robotic control.

– Examples where reconstructing the image is a proper step:

• medical imagery

• construct topological maps

• Perception was considered in isolation.

• Elitism led vision researchers to consider mostly the interests of their own closed community.

• Too much energy invested in building World Models.

Page 34

Active Perception Paradigm.

• Task B: a mobile robot must navigate across outdoor terrain.

  – Many of the details are likely to be irrelevant in this task.

  – The responsiveness of the robot depends on how precisely it focuses on just the right visual feature set.

Page 35

Perception produces motor control outputs, not representations.

• Action-oriented perception.

• Expectation-based perception.

• Focus of attention.

• Active perception: the agent can use motor control to enhance perceptual processing.

Page 36

Cameras as sensors.

• Information about the incoming light (e.g., intensity, color) is detected by photosensitive elements built from silicon circuits in charge-coupled devices (CCDs) placed on the image plane.

• In machine vision, the computer must make sense of the information it gets on the image plane.

• The lens focuses the incoming light.

• Light scattered from objects in the environment is projected through a lens system onto the image plane.

Page 37

Cameras as sensors.

• Only objects at a particular range of distances from the lens will be in focus. This range of distances is called the camera's depth of field.

• The image plane is subdivided into pixels, typically arranged in a grid (e.g., 512 x 512).

• The projection on the image plane is called the image.

• Our goal is to extract information about the world from a 2D projection of the energy stream derived from a complex 3D interaction with the world.

Page 38

Pinhole camera model.

Perspective projection geometry. Mathematically equivalent non-inverting geometry.
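A minimal sketch of the non-inverting pinhole (perspective projection) model: a camera-frame point (X, Y, Z) maps to image coordinates (f X / Z, f Y / Z). The focal length and the point used below are illustrative values.

def project(point, f):
    """Pinhole projection of a 3-D camera-frame point onto the image plane.

    point -- (X, Y, Z) with Z the distance along the optical axis (Z > 0)
    f     -- focal length (same units as the image coordinates returned)
    """
    X, Y, Z = point
    return (f * X / Z, f * Y / Z)       # non-inverting (frontal image plane) model

print(project((0.2, -0.1, 2.0), f=0.008))   # (0.0008, -0.0004): meters on the sensor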

Page 39

Measuring the distance to object P with two cameras

(Figure: cameras 1 and 2 with focal length f; lateral object offsets l and r, image offsets a and b.)

Camera 1: object P lies a distance l from the lens axis of camera 1, and its image appears a distance a from that axis.

Camera 2: object P lies a distance r from the lens axis of camera 2, and its image appears a distance b from that axis.

By similar triangles (d = distance from the lenses to P, f = focal length):

  d / l = (d + f) / (l + a)

  d / r = (d + f) / (r + b)

Solving both relations for d gives:

  d = f (l + r) / (a + b)

l + r = the distance between the cameras (the baseline).

a + b is called the disparity.

The disparity is inversely proportional to the distance to object P.

Page 40

A simple example: stereo system encodes depth entirely in terms of disparity.

  z = 2 d f / (uL - uR)

where
  z = distance to the object
  uL - uR = disparity (difference of the image coordinates in the left and right cameras)
  2d = distance between the cameras
  f = focal length
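A small sketch consistent with the relation above, treating the camera separation 2d as a single "baseline" parameter; the focal length and image coordinates are illustrative.

def depth_from_disparity(u_left, u_right, f, baseline):
    """Depth of a point from its horizontal image coordinates in two rectified cameras.

    u_left, u_right -- image-plane x coordinates of the same point (same units as f)
    f               -- focal length; baseline -- distance between the cameras (2d)
    """
    disparity = u_left - u_right
    if disparity == 0:
        raise ValueError("zero disparity corresponds to an infinitely distant point")
    return f * baseline / disparity

# A point 5 m away seen by cameras 0.12 m apart with an 8 mm lens:
print(depth_from_disparity(0.0010, 0.000808, f=0.008, baseline=0.12))  # ~5.0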

Page 41

The information needed to reconstruct the 3D geometry also includes:

• the kinematic configuration of the cameras, and

• the offset from the image center (optical distortions).

Geometrical parameters for binocular imaging.

Page 42

42

2dהמרחק בין המצלמות כמתואר בציור.

חשב את המרחקים • מהעצם לכל אחד מהמצלמות

ו- הצג את פתרונך מפורט •

ככל יכולתך.התעלם •מהקינמטיקה של מערכת –

המצלמות (תנודות יחסיות) עוויתיים אופטיים כתלות –

ממרחק העצם ממרכז

.התמונה

a 2 תרגיל מס

R L

.Pשתי מצלמות מכוונות לצלם את אותו עצם

P

Page 43

Edge detection

• The brightness of each pixel in the image is proportional to the amount of light directed toward the camera by the surface patch of the object that projects to that pixel.

• Image from a black-and-white camera:

  – a collection of 512 x 512 pixels with different gray levels (brightness).

  – to find an object we have to find its edges: do edge detection.

  – we define edges as curves in the image plane across which there is a significant change in brightness.

  – edge detection is performed in two steps:

    * detection of edge segments/elements, called edgels

    * aggregation of edgels

  – because of noise (all sorts of spurious peaks), we must first smooth the image.

Page 44

Smoothing: How do we deal with noise?

• Convolution.

– Applies a filter: a mathematical procedure which finds and eliminates isolated peaks.

Convolution is the operation of computing the weighted integral of one function with respect to another function that has first been reflected about the origin and then variably displaced.

In one continuous dimension:

  h(t) * i(t) = ∫ h(t - τ) i(τ) dτ = ∫ h(τ) i(t - τ) dτ

Graphical Convolution - Discrete Time - Continuous Time

Page 45

How is this done?

• Integral or differential operators acting on pixels are implemented as a matrix of multipliers (a mask) applied to each pixel as the mask is moved across the image.

• Masks are typically moved from left to right and top to bottom, as you would read a book.

(Masks shown in the original slide: the Sobel gradient and the Laplacian.)
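A minimal Python sketch of sliding a (2n+1) x (2n+1) mask across an image, demonstrated here with a 3 x 3 averaging mask (a smoothing filter); the border handling and the toy image are simplifying assumptions.

import numpy as np

def convolve2d(image, mask):
    """Apply a (2n+1) x (2n+1) convolution mask by sliding it across the image.

    Border pixels the mask cannot fully cover are left unchanged here to keep
    the sketch short; real implementations pad or crop instead.
    """
    n = mask.shape[0] // 2
    out = image.astype(float)
    flipped = mask[::-1, ::-1]                      # true convolution flips the mask
    for y in range(n, image.shape[0] - n):
        for x in range(n, image.shape[1] - n):
            window = image[y - n:y + n + 1, x - n:x + n + 1]
            out[y, x] = np.sum(window * flipped)
    return out

image = np.zeros((7, 7)); image[3, 3] = 9.0         # a single bright pixel (isolated peak)
box = np.ones((3, 3)) / 9.0                         # 3x3 averaging mask for smoothing
print(convolve2d(image, box)[2:5, 2:5])             # the peak is spread over its neighbors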

Page 46

Examples of Convolution operators as filters at work.

Page 47

Some mathematical background. The Dirac delta function.

Page 48

Fourier Transform

Page 49

Fourier Transform Pairs.

Page 50

The Shift theorem.

Page 51

The convolution theorem.

An important property of the convolution lies in the way in which it maps through the Fourier transform:

Convolution in the spatial domain is equivalent to multiplication in the frequency domain and vice versa.

Convolution operators are essentially spectral filters.
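A quick numerical check of the convolution theorem with NumPy's FFT: circular convolution computed directly in the spatial domain matches multiplication of the transforms followed by an inverse transform. The two signals below are arbitrary.

import numpy as np

# Two arbitrary signals of the same length (treated as periodic by the DFT)
f = np.array([1.0, 2.0, 3.0, 4.0, 0.0, 0.0, 0.0, 0.0])
g = np.array([0.5, 0.5, 0.0, 0.0, 0.0, 0.0, 0.0, 0.0])

# Circular convolution computed directly in the spatial domain
direct = np.array([sum(f[k] * g[(n - k) % len(f)] for k in range(len(f)))
                   for n in range(len(f))])

# The same result via the frequency domain: multiply the transforms, then invert
via_fft = np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)).real

print(np.allclose(direct, via_fft))   # True: convolution <-> multiplication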

Page 52

The Sampling Theorem.

f(x) is a continuous spatial function representing the image.

g(x) is an infinite sequence of Dirac delta functions (a sampling comb).

The product of these two is h(x), the sampled approximation.

Using the convolution theorem, the Fourier transform of the sampled image follows from the transforms of f and g.

The frequency spectrum of the sampled image consists of duplicates of the spectrum of the original image, distributed at intervals of the sampling frequency.

Page 53

With sample spacing x0, the transform of the sampling comb is G(u) = (1/x0) Σn δ(u - n/x0), and the spectrum of the sampled image is the convolution

  H(u) = F(u) * G(u) = ∫ F(α) G(u - α) dα

       = (1/x0) Σn ∫ F(α) δ(u - n/x0 - α) dα

       = (1/x0) Σn F(u - n/x0)

i.e., the spectrum of the original image replicated at intervals of 1/x0.

Page 54

The Sampling Theorem. Nyquist Theorem.

If the image contains no frequency components greater than one half of the sampling frequency, then the continuous image is faithfully represented by the sampled image.

The Sampling Theorem

An illustrative example:

When replicated spectra interfere, the crosstalk introduces “energy” at relatively high frequencies changing the appearance of the reconstructed image.

Page 55

Early Processing

The convolution of a continuous 2D signal g with an operator f is:

  h(x, y) = f(x, y) * g(x, y) = ∫∫ f(x', y') g(x - x', y - y') dx' dy'

For discrete functions, the equivalent operation is:

  h(x, y) = Σi Σj f(i, j) g(x - i, y - j),    i, j = -n, ..., n

– where h(x,y) is a new image generated by convolving the image g(x,y) with the (2n+1) x (2n+1) convolution mask f(i,j).

– for n = 1 the operator is 3 x 3.

– this is convenient because it allows a convolution process where the response h(x,y) depends on a neighborhood of support of the original image g(x,y), weighted according to the convolution mask f(i,j).

Page 56

Edge Detection

• Locate sharp changes in the intensity function.

• Edges are pixels where brightness changes abruptly.

• Calculus describes changes of continuous functions using derivatives; since an image function depends on two variables, partial derivatives are used.

Page 57

Edge Detector

• An edge is a property attached to an individual pixel and is calculated from the image function behavior in a neighborhood of the pixel.

• It is a vector variable with two components:

  – the magnitude of the gradient, and

  – the direction.

Page 58

Edge Detectors

– The gradient direction gives the direction of maximal growth of the function, e.g., from black (f(i,j)=0) to white (f(i,j)=255).

Page 59

– Edges are often used in image analysis for finding region boundaries.

– Boundary and its parts (edges) are perpendicular to the direction of the gradient.

Page 60

Gradient

• A digital image is discrete in nature ... derivatives must be approximated by differences

• n is a small integer, usually 1.

Page 61

• Gradient operators can be divided into three categories:

  – I. Operators approximating derivatives of the image function using differences.

    • Rotationally invariant operators (e.g., the Laplacian) need only one convolution mask.

    • Operators approximating the first derivatives use several masks; the orientation is estimated on the basis of the best match among several simple patterns.

  – II. Operators based on the zero crossings of the second derivative of the image function.

Page 62

Edge Detection Operators: Laplace Operator

• Laplace Operator (magnitude only)

• The Laplacian is approximated in digital images by a convolution sum.

• 3 x 3 masks for the 4-neighborhood and the 8-neighborhood are given below.
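The masks themselves are not reproduced in this transcript; the commonly used 3 x 3 Laplacian masks for the 4- and 8-neighborhood are shown below for reference.

import numpy as np

# Discrete Laplacian masks (second-derivative approximations)
laplace_4 = np.array([[0,  1, 0],
                      [1, -4, 1],
                      [0,  1, 0]])        # 4-neighborhood

laplace_8 = np.array([[1,  1, 1],
                      [1, -8, 1],
                      [1,  1, 1]])        # 8-neighborhood

# Both masks sum to zero, so they give no response in regions of constant brightness
print(laplace_4.sum(), laplace_8.sum())   # 0 0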

Page 63

Compare

Page 64

Commonly used gradient operators that accomplish smoothing and differentiation simultaneously

- smoothing by averaging the gradient computation over several rows or columns, and

- differentiation by the finite difference operator.

Page 65

Roberts Operator

• The magnitude of the edge is computed by combining the responses of the two Roberts masks, as in the sketch below.

• A disadvantage of the Roberts operator is its high sensitivity to noise.
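A minimal sketch of the Roberts operator, using the usual 2 x 2 cross-difference masks and the sum of absolute responses as one common choice of magnitude; the test image is illustrative.

import numpy as np

def roberts_magnitude(image):
    """Edge magnitude from the two 2x2 Roberts cross-difference masks."""
    image = image.astype(float)
    d1 = image[:-1, :-1] - image[1:, 1:]    # response of the mask [[1, 0], [0, -1]]
    d2 = image[:-1, 1:] - image[1:, :-1]    # response of the mask [[0, 1], [-1, 0]]
    return np.abs(d1) + np.abs(d2)          # one common choice of magnitude

step = np.zeros((5, 5)); step[:, 2:] = 255.0     # vertical brightness step
print(roberts_magnitude(step))                   # responds only along the step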

Page 66

Compare

Page 67

Prewitt operator

• The gradient is estimated in eight possible directions, and the convolution result of greatest magnitude indicates the gradient direction.
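The masks are not reproduced in this transcript; one common presentation of the Prewitt compass masks (three of the eight, the rest being rotations of these) is sketched below.

import numpy as np

# Three of the eight Prewitt compass masks; the remaining ones are rotations.
# The largest-magnitude response among all eight indicates the gradient direction.
h1 = np.array([[ 1,  1,  1],
               [ 0,  0,  0],
               [-1, -1, -1]])
h2 = np.array([[ 0,  1,  1],
               [-1,  0,  1],
               [-1, -1,  0]])
h3 = np.array([[-1,  0,  1],
               [-1,  0,  1],
               [-1,  0,  1]])

masks = [np.rot90(h1, k) for k in range(4)] + [np.rot90(h2, k) for k in range(4)]
print(len(masks))   # 8 compass directions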

Page 68

Compare

Page 69

Sobel Operator

• Used as a simple detector of horizontality and verticality of edges in which case only masks h1 and h3 are used.

Page 70

Edge – Sobel Operator

• If the h1 response is y and the h3 response is x, we might then derive the edge strength (magnitude) as

  sqrt(x² + y²)

• and the direction as arctan(y/x).
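A minimal Sobel sketch combining the two masks into magnitude and direction as above. The particular mask signs follow one common convention (they vary between texts), and the masks are applied as correlation; flipping them only changes the sign of the response.

import numpy as np

h1 = np.array([[ 1,  2,  1],
               [ 0,  0,  0],
               [-1, -2, -1]])    # responds to horizontal edges ("y" response)
h3 = np.array([[-1,  0,  1],
               [-2,  0,  2],
               [-1,  0,  1]])    # responds to vertical edges ("x" response)

def sobel(image):
    """Return edge magnitude and direction from the 3x3 Sobel responses."""
    image = image.astype(float)
    pad = np.pad(image, 1, mode="edge")
    y = np.zeros_like(image); x = np.zeros_like(image)
    for i in range(image.shape[0]):
        for j in range(image.shape[1]):
            window = pad[i:i + 3, j:j + 3]
            y[i, j] = np.sum(window * h1)
            x[i, j] = np.sum(window * h3)
    return np.sqrt(x**2 + y**2), np.arctan2(y, x)

ramp = np.tile(np.arange(6.0), (6, 1))          # brightness increasing left to right
magnitude, direction = sobel(ramp)
print(magnitude[2, 2], direction[2, 2])         # magnitude 8, gradient along +x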

Page 71

Compare

Page 72

Edge Operators

• Robinson operator

• Kirsch operator

Page 73

Zero crossings of the second derivative

• The main disadvantage of these edge detectors is their dependence on the size of objects and sensitivity to noise.

• An edge detection technique, based on the zero crossings of the second derivative explores the fact that a step edge corresponds to an abrupt change in the image function.

• The first derivative of the image function should have an extremum at the position corresponding to the edge in the image, and so the second derivative should be zero at the same position.

Page 74

Edge Sharpening.

One way of accomplishing detection of faint edges while maintaining precision at strong edges is to require that the second derivative be near zero, while the first derivative be above some threshold.

The Laplacian operator, ∇²f = ∂²f/∂x² + ∂²f/∂y², approximates the second derivative of the image function.

Identifying the inflection point in the intensity function.

Page 75

Laplacian of the Gaussian (LoG)

• Robust calculation of the 2nd derivative:

  – smooth the image first (to reduce noise), and then compute the second derivatives.

  – smoothing uses the 2D Gaussian operator G(x,y); a sketch of the resulting LoG mask follows.
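A sketch of building a LoG convolution mask, assuming the unnormalized Gaussian G = exp(-(x² + y²)/(2σ²)), whose Laplacian is ((x² + y² - 2σ²)/σ⁴) G; the discrete mask is then shifted so its elements sum to zero, as required on the next slide. The mask size and σ are illustrative.

import numpy as np

def log_mask(size, sigma):
    """Laplacian-of-Gaussian convolution mask whose elements sum to zero."""
    n = size // 2
    y, x = np.mgrid[-n:n + 1, -n:n + 1].astype(float)
    r2 = x**2 + y**2
    mask = (r2 - 2 * sigma**2) / sigma**4 * np.exp(-r2 / (2 * sigma**2))
    # The continuous LoG integrates to zero, but a truncated sampled mask does not,
    # so subtract the mean to make the discrete elements sum to zero.
    return mask - mask.mean()

m = log_mask(9, sigma=1.4)
print(round(m.sum(), 12))    # 0.0 (up to rounding): no response to constant brightness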

Page 76

LoG

– After returning to the original coordinates x, y and introducing a normalizing multiplicative coefficient c (which absorbs the σ-dependent normalization), we get the convolution mask of a zero-crossing detector,

– where c normalizes the sum of the mask elements to zero.

Page 77

LoG

(Figure: comparison of the LoG, Roberts, Prewitt, and Sobel edge detector outputs.)

Page 78

Segmentation.

• Try to find objects among all those edges.

• Segmentation is the process of dividing up or organizing the image into parts that correspond to continuous objects.

• how do we know which lines correspond to which objects?

– Model based vision: store models of line drawings of objects and then compare with all possible combinations of edges (many possible angles, different scale, …)

– Motion vision: compare two consecutive images, while moving the camera.

Each continuous object will move.

The brightness of any object will be conserved.

Subtract the images.

Page 79

Segmentation.

– Stereo-based: like motion vision, but using the disparity.

– Texture: areas that have uniform texture are consistent, and have almost identical brightness, so we can assume they come from the same object.

– Use shading and contours.

• All these methods have been studied extensively and turn out to be very difficult.

• Alternatively, we can do object recognition. Simplify by:

– use color.

– use a small image plane.

– use simpler sensors than vision, or combine information from different sensors: sensor fusion.

– use information about the task: the active perception paradigm.