Computer Vision - Color


Transcript of Computer Vision - Color

Page 1: Computer Vision - Color

Computer Vision - Color

Hanyang University

Jong-Il Park

Page 2: Computer Vision - Color

Topics to be covered

Light and Color

Color Representation

Color Discrimination

Application

Page 3: Computer Vision - Color

The visible light spectrum

We “see” electromagnetic radiation in a range of wavelengths

Page 4: Computer Vision - Color

Relative sizes

Page 5: Computer Vision - Color

Light spectrum

The appearance of light depends on its power spectrum: how much power (or energy) it carries at each wavelength (examples: daylight, a tungsten bulb).

Our visual system converts a light spectrum into “color”

This is a rather complex transformation

Page 6: Computer Vision - Color

The human visual system

Color perception: light hits the retina, which contains photosensitive cells, the rods and cones. These cells convert the spectrum into a few discrete values.

Page 7: Computer Vision - Color

Density of rods and cones

Rods and cones are non-uniformly distributed on the retina. Rods are responsible for intensity, cones for color.

Fovea: a small region (1 or 2°) at the center of the visual field containing the highest density of cones (and no rods). There is less visual acuity in the periphery, since many rods are wired to the same neuron.

[Figure: cross-section of the retina showing cones and rods, with incoming light.]

Page 8: Computer Vision - Color

Rods: Twilight Vision

130 million rod cells per eye.

1000 times more sensitive to light than cone cells.

Most sensitive to green light (about 550-555 nm), but with a broad range of response throughout the visible spectrum.

Produces relatively blurred images, in shades of gray.

Pure rod vision is also called twilight vision.

[Plot: relative neural response of the rods as a function of light wavelength (400-700 nm).]

Page 9: Computer Vision - Color

Cones: Color Vision

7 million cone cells per eye.

Three types of cones* (S, M, L), each "tuned" to a different maximum response at:

S : 430 nm (blue) (2%)

M: 535 nm (green) (33%)

L : 590 nm (red) (65%)

Produces sharp, color images.

Pure cone vision is called photopic or color vision.

[Plot: spectral absorption of light by the three cone types (S, M, L), 400-700 nm.]

*S = short-wavelength cone, M = medium-wavelength cone, L = long-wavelength cone

Page 10: Computer Vision - Color

Color perception

Three types of cones. Each is sensitive in a different region of the spectrum.

Different sensitivities: we are more sensitive to green than to red; sensitivity varies from person to person (and with age).

Colorblindness: a deficiency in at least one type of cone.

[Figure: L-cone response curve.]

Page 11: Computer Vision - Color

Color perception

Rods and cones act as filters on the spectrum. To get the output of a filter, multiply its response curve by the spectrum and integrate over all wavelengths. Each cone yields one number.

Q: How can we represent an entire spectrum with 3 numbers?

[Figure: a power spectrum (power vs. wavelength) overlaid with the S, M, and L response curves.]

A: We can’t! Most of the information is lost. As a result, two different spectra may appear indistinguishable.

• Such spectra are known as metamers.
• http://www.cs.brown.edu/exploratories/freeSoftware/repository/edu/brown/cs/exploratories/applets/spectrum/metamers_guide.html
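To make the integration concrete, here is a small numerical sketch (the Gaussian curves and spectra are invented; only the peak wavelengths 430, 535, and 590 nm are taken from the cones slide above). It constructs a second, physically different spectrum with the same three cone responses, i.e. a metamer:

```python
import numpy as np

wl = np.arange(400, 701, 1.0)                 # wavelengths, nm (1 nm steps)

def gaussian(peak, width):
    """Toy cone sensitivity curve (an assumption, not measured data)."""
    return np.exp(-0.5 * ((wl - peak) / width) ** 2)

# Rough S, M, L curves; peak wavelengths taken from the cones slide (430, 535, 590 nm).
S, M, L = gaussian(430, 25), gaussian(535, 40), gaussian(590, 40)

def cone_response(spectrum):
    """Multiply the spectrum by each response curve and sum over wavelength."""
    return np.array([(spectrum * c).sum() for c in (S, M, L)])

# A target spectrum, and three narrow "primaries" we are allowed to mix.
target = gaussian(560, 60)
primaries = np.stack([gaussian(450, 15), gaussian(540, 15), gaussian(610, 15)])

# Solve for mixing weights whose cone responses match the target's exactly.
A = np.array([cone_response(p) for p in primaries]).T   # 3x3 matrix of primary responses
w = np.linalg.solve(A, cone_response(target))
metamer = w @ primaries                                 # a physically different spectrum

print(cone_response(target))
print(cone_response(metamer))   # same three numbers -> the two spectra are metamers
```

If a weight comes out negative, the match would require a "negative" amount of that primary, which is why real color-matching experiments sometimes move a primary to the other side of the match.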

Page 12: Computer Vision - Color

Eye Color Sensitivity

Although the response curves of the L, M, and S cones are similar in shape, the numbers of the different cone types vary: L:M:S = 40:20:1.

Cone responses typically overlap for any given stimulus, especially for the M and L cones.

The human eye is most sensitive to green light.

[Figures: spectral absorption of light by the three cone types (S, M, L), 400-700 nm; S, M, and L cone distribution in the fovea; effective sensitivity of the cones on a log plot.]

Page 13: Computer Vision - Color

Theory of Trichromatic Vision

The principle that the color you see depends on signals from the three types of cones (L, M, S).

The principle that visible color can be mapped in terms of the three colors (R, G, B) is called trichromacy.

The three numbers used to represent the different intensities of red, green, and blue needed are called tristimulus values.

[Figure: a color stimulus matched by a mixture of the three primaries; the required amounts (r, g, b) are the tristimulus values.]

Page 14: Computer Vision - Color

Seeing Colors

The colors we perceive depend on:
• the illumination source,
• the object reflectance factor, and
• the observer's spectral sensitivity (observer response).

Illumination source x Object reflectance x Observer spectral sensitivity = Tristimulus values (viewer response: r, g, b)

The product of these three factors produces the sensation of color.
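A compact numerical sketch of this product (all three spectra below are invented placeholders, not data from the slides):

```python
import numpy as np

wl = np.arange(400, 701, 5.0)                       # wavelength samples, nm
illuminant  = np.ones_like(wl)                      # placeholder: flat "white" light
reflectance = np.clip((wl - 450) / 250, 0, 1)       # placeholder: a reddish surface
sensitivity = np.stack([                            # placeholder r, g, b observer curves
    np.exp(-0.5 * ((wl - 600) / 40) ** 2),
    np.exp(-0.5 * ((wl - 550) / 40) ** 2),
    np.exp(-0.5 * ((wl - 450) / 40) ** 2)])

# Tristimulus values: integrate illumination x reflectance x observer sensitivity.
r, g, b = (sensitivity * illuminant * reflectance).sum(axis=1)
print(r, g, b)   # the viewer's response to this surface under this light
```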

Page 15: Computer Vision - Color

Additive Colors

Start with Black – absence of any colors. The more colors added, the brighter it gets.

Color formation by the addition of Red, Green, and Blue, the three primary colors

Examples of additive color usage: the human eye, lighting, color monitors, color video cameras.

[Figure: additive color wheel.]

Page 16: Computer Vision - Color

Subtractive Colors

Starts with a white background (usually paper).

Use Cyan, Magenta, and/or Yellow dyes to subtract from light reflected by paper, to produce all colors.

Examples of subtractive color use: color printers, paints.

[Figure: subtractive color wheel.]

Page 17: Computer Vision - Color

Using Subtractive Colors on Film

Color-absorbing pigments are layered on each other. As white light passes through each layer, different wavelengths are absorbed. The resulting color is produced by subtracting unwanted colors from white.

[Figure: white light passing through cyan, magenta, and yellow pigment layers over a reflecting layer (white paper); depending on which layers are present, the reflected light appears red, green, blue, black, or white.]

Page 18: Computer Vision - Color

[Plot: relative power vs. wavelength (380-780 nm). The dashed line represents daylight reflected from a sunflower, while the solid line represents the light emitted from a color monitor adjusted to match the color of the sunflower.]

Metamerism

Spectrally different lights that stimulate the cones identically appear identical.

Such colors are called color metamers.

This phenomenon is called metamerism.

Almost all the colors that we see on computer monitors are metamers.

Page 19: Computer Vision - Color

The Mechanics of Metamerism

Under trichromacy, any color stimulus can be matched by a mixture of three primary stimuli.

Metamers are colors having the same tristimulus values R, G, and B; they will match color stimulus C and will appear to be the same color.

[Plots: relative power vs. wavelength (380-780 nm) for a color stimulus and for two metamers. The two metamers look the same because they have similar tristimulus values.]

R = ∫[380,780] S(λ) r̄(λ) dλ
G = ∫[380,780] S(λ) ḡ(λ) dλ
B = ∫[380,780] S(λ) b̄(λ) dλ

where S(λ) is the stimulus spectrum and r̄, ḡ, b̄ are the matching functions of the primaries.

Page 20: Computer Vision - Color

Gamut

A gamut is the range of colors that a device can render, or detect.

The larger the gamut, the more colors can be rendered or detected.

A large gamut implies a large color space.

[Figure: CIE xy chromaticity diagram comparing the human vision gamut, a monitor gamut, and a photographic film gamut.]
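As a small illustration of the idea (not from the slides), a device gamut in the xy chromaticity plane can be checked with a point-in-triangle test. The primary chromaticities below are the standard sRGB values; the function names are illustrative:

```python
# Sketch: test whether an (x, y) chromaticity falls inside a monitor gamut,
# modeled as the triangle spanned by the sRGB primaries.
SRGB_PRIMARIES = [(0.64, 0.33), (0.30, 0.60), (0.15, 0.06)]  # R, G, B as (x, y)

def _sign(p, a, b):
    """Signed area test used for the point-in-triangle check."""
    return (p[0] - b[0]) * (a[1] - b[1]) - (a[0] - b[0]) * (p[1] - b[1])

def in_gamut(xy, primaries=SRGB_PRIMARIES):
    """True if chromaticity xy lies inside the triangle of the primaries."""
    r, g, b = primaries
    s1, s2, s3 = _sign(xy, r, g), _sign(xy, g, b), _sign(xy, b, r)
    has_neg = (s1 < 0) or (s2 < 0) or (s3 < 0)
    has_pos = (s1 > 0) or (s2 > 0) or (s3 > 0)
    return not (has_neg and has_pos)

print(in_gamut((0.3127, 0.3290)))  # D65 white point -> True
print(in_gamut((0.08, 0.85)))      # highly saturated green -> False (outside the monitor gamut)
```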

Page 21: Computer Vision - Color

Color Spaces

A color space is a method by which colors are specified, created, and visualized.

Colors are usually specified using three attributes, or coordinates, which represent a color's position within a specific color space.

These coordinates do not tell us what the color looks like, only where it is located within a particular color space.

Color models are 3D coordinate systems, and a subspace within that system, where each color is represented by a single point.

Page 22: Computer Vision - Color

Color Spaces

Color spaces are often geared towards specific applications or hardware.

Several types:
• HSI (Hue, Saturation, Intensity) based
• RGB (Red, Green, Blue) based
• CMY(K) (Cyan, Magenta, Yellow, Black) based
• CIE based
• Luminance-Chrominance based

CIE: International Commission on Illumination

Page 23: Computer Vision - Color

RGB*

One of the simplest color models.

Cartesian coordinates for each color; one axis is assigned to each of the three primary colors red (R), green (G), and blue (B).

Corresponds to the principles of additive colors.

Other colors are represented as an additive mix of R, G, and B.

Ideal for use in computers.

*Red, Green, and Blue

[Figure: RGB color cube with corners Black (0,0,0), Red (1,0,0), Green (0,1,0), Blue (0,0,1), Yellow (1,1,0), Magenta (1,0,1), Cyan (0,1,1), and White (1,1,1).]

Page 24: Computer Vision - Color

RGB Image Data

[Figure: a full-color image and its red, green, and blue channels.]

Page 25: Computer Vision - Color

CMY(K)*

The main color model used in the printing industry. Related to RGB.

Corresponds to the principle of subtractive colors, using the three secondary colors Cyan, Magenta, and Yellow.

Theoretically, a uniform mix of cyan, magenta, and yellow produces black (center of picture). In practice, the result is usually a dirty brown-gray tone. So black is often used as a fourth color.

*Cyan, Magenta, Yellow, (and blacK)

[Figure: overlapping cyan, magenta, and yellow circles on a white background produce red, green, blue, and black: producing other colors from subtractive colors.]

Page 26: Computer Vision - Color

CMY Image Data

[Figure: the full-color image and its cyan (1-R), magenta (1-G), and yellow (1-B) components.]

Page 27: Computer Vision - Color

CMY – RGB Transformation

The following matrices will perform transformations between RGB and CMY color spaces.

Note that: R = Red, G = Green, B = Blue; C = Cyan, M = Magenta, Y = Yellow. All values for R, G, B and C, M, Y must first be normalized.

[C]   [1]   [R]          [R]   [1]   [C]
[M] = [1] - [G]          [G] = [1] - [M]
[Y]   [1]   [B]          [B]   [1]   [Y]
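A minimal sketch of these transformations (assuming values already normalized to [0, 1]; the black-extraction helper at the end is one common approximation, not something defined on the slide):

```python
def rgb_to_cmy(r, g, b):
    """CMY = 1 - RGB, with all values normalized to [0, 1]."""
    return 1.0 - r, 1.0 - g, 1.0 - b

def cmy_to_rgb(c, m, y):
    """RGB = 1 - CMY, the inverse transformation."""
    return 1.0 - c, 1.0 - m, 1.0 - y

def cmy_to_cmyk(c, m, y):
    """Pull a black (K) component out of CMY (a common simple approximation)."""
    k = min(c, m, y)
    if k == 1.0:                       # pure black: avoid division by zero
        return 0.0, 0.0, 0.0, 1.0
    return (c - k) / (1 - k), (m - k) / (1 - k), (y - k) / (1 - k), k

# Example: pure red in RGB needs full magenta and yellow ink, but no cyan.
print(rgb_to_cmy(1.0, 0.0, 0.0))   # -> (0.0, 1.0, 1.0)
```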

Page 28: Computer Vision - Color

HSI / HSL / HSV*

Very similar to the way human vision sees color.

Works well for natural illumination, where hue changes with brightness.

Used in machine color vision to identify the color of different objects.

Image processing operations such as histogram operations, intensity transformations, and convolutions operate only on an image's intensity and are performed much more easily on an image in the HSI color space.

*H = Hue, S = Saturation, I (Intensity) = B (Brightness), L = Lightness, V = Value

Page 29: Computer Vision - Color

HSI Color Space

Hue
What we describe as the color of the object.
Hues are based on the RGB color space. The hue of a color is defined by its counterclockwise angle from red (0°); e.g. green = 120°, blue = 240°.

[Figures: the RGB cube viewed from the gray-scale axis (and rotated 30°), and the HSI color wheel with red at 0°, green at 120°, and blue at 240°.]

Saturation
The degree to which a hue differs from neutral gray.
100% = fully saturated, high contrast with other colors; 0% = a shade of gray, low contrast.
Saturation is measured radially from the intensity axis.

[Figure: saturation scale from 0% to 100%.]

Page 30: Computer Vision - Color

HSI Color Space

Intensity
The brightness of each hue, defined by its height along the vertical axis.
Maximum saturation occurs at 50% intensity; as intensity increases or decreases from 50%, saturation decreases.
This mimics the eye's response in nature: as things become brighter they look more pastel, until they become washed out.
Pure white at 100% intensity: hue and saturation undefined.
Pure black at 0% intensity: hue and saturation undefined.

[Figure: the HSI solid, with hue as the angle around the axis, saturation from 0% to 100% radially, and intensity from 0% to 100% along the vertical axis.]
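One common set of RGB-to-HSI conversion formulas consistent with the geometry above (exact formulations vary between textbooks; this angle-based version is a sketch, with illustrative function names):

```python
import math

def rgb_to_hsi(r, g, b):
    """Convert normalized RGB in [0, 1] to (hue in degrees, saturation, intensity)."""
    i = (r + g + b) / 3.0                       # intensity: average of the channels
    if i == 0:                                  # pure black: hue/saturation undefined
        return 0.0, 0.0, 0.0
    s = 1.0 - min(r, g, b) / i                  # saturation: distance from neutral gray
    num = 0.5 * ((r - g) + (r - b))
    den = math.sqrt((r - g) ** 2 + (r - b) * (g - b))
    if den == 0:                                # gray: hue undefined, report 0
        return 0.0, s, i
    h = math.degrees(math.acos(max(-1.0, min(1.0, num / den))))
    if b > g:                                   # hue is measured counterclockwise from red
        h = 360.0 - h
    return h, s, i

print(rgb_to_hsi(0.0, 1.0, 0.0))   # pure green -> hue of about 120 degrees
```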

Page 31: Computer Vision - Color

HSI Image Data

[Figure: a full image and its hue, saturation, and intensity channels.]

Page 32: Computer Vision - Color

CIE L*a*b* Color Space / CIELAB

The second of two systems adopted by the CIE in 1976 as models that better show uniform color spacing in their values.

Based on the earlier (1942) color opposition system by Richard Hunter called L, a, b.

Very important for desktop color.

Basic color model in Adobe PostScript (level 2 and level 3)

Used for color management as the device independent model of the ICC* device profiles.

CIE L*a*b* color axes

*International Color Consortium

Page 33: Computer Vision - Color

CIE L*a*b* (cont’d)

Central vertical axis: Lightness (L*), which runs from 0 (black) to 100 (white).

a-a' axis: +a values indicate amounts of red, -a values indicate amounts of green.

b-b' axis: +b values indicate amounts of yellow, -b values indicate amounts of blue. For both axes, zero is neutral gray.

Only values for two color axes (a*, b*) and the lightness or grayscale axis (L*) are required to specify a color.

The CIELAB color difference, ΔE*ab, between two points (L1*, a1*, b1*) and (L2*, a2*, b2*) is given by:

ΔE*ab = √( (ΔL*)² + (Δa*)² + (Δb*)² )

[Figure: CIE L*a*b* color axes, with L* from 0 to 100 vertically and the a (red-green) and b (yellow-blue) axes in the horizontal plane.]
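A small sketch of this color-difference computation (the function name and example values are illustrative):

```python
import math

def delta_e_ab(lab1, lab2):
    """CIELAB color difference: Euclidean distance between two (L*, a*, b*) points."""
    dL = lab1[0] - lab2[0]
    da = lab1[1] - lab2[1]
    db = lab1[2] - lab2[2]
    return math.sqrt(dL * dL + da * da + db * db)

# Example: two similar mid-gray colors differing mostly along the a* (red-green) axis.
print(delta_e_ab((50.0, 2.0, -3.0), (50.0, 6.0, -2.0)))   # about 4.1
```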

Page 34: Computer Vision - Color

CIELAB Image Data

[Figure: a full-color image and its L*, a*, and b* channels.]

Page 35: Computer Vision - Color

Scene → scene radiance L → (lens) → image irradiance E → (camera electronics) → measured pixel values I

• Before light hits the image plane: the mapping from scene radiance L to image irradiance E is linear.
• After light hits the image plane: the mapping from image irradiance E to measured pixel values I is non-linear.

Can we go from the measured pixel value, I, to the scene radiance, L?

Relationship between Scene and Image Brightness

Page 36: Computer Vision - Color

Demosaicking

Cf. 3CCD camera
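The slide shows a color-filter-array (Bayer) image being interpolated to a full RGB image; a 3CCD camera avoids this step by using one sensor per channel. As a rough illustration only, and not the specific algorithm on the slide, here is a minimal bilinear demosaicking sketch for an RGGB Bayer pattern:

```python
import numpy as np

def demosaic_bilinear(raw):
    """Very simple bilinear demosaicking for an RGGB Bayer pattern (illustrative only).

    raw: 2-D array of sensor values; returns an (H, W, 3) RGB image.
    """
    H, W = raw.shape
    rgb = np.zeros((H, W, 3), dtype=float)
    masks = np.zeros((H, W, 3), dtype=float)
    masks[0::2, 0::2, 0] = 1          # R samples at even rows, even cols
    masks[0::2, 1::2, 1] = 1          # G samples at even rows, odd cols
    masks[1::2, 0::2, 1] = 1          # G samples at odd rows, even cols
    masks[1::2, 1::2, 2] = 1          # B samples at odd rows, odd cols

    kernel = np.array([[0.25, 0.5, 0.25],
                       [0.5,  1.0, 0.5 ],
                       [0.25, 0.5, 0.25]])

    def conv2(img):
        """3x3 convolution with zero padding (small helper, no SciPy needed)."""
        p = np.pad(img, 1)
        out = np.zeros_like(img)
        for dy in (-1, 0, 1):
            for dx in (-1, 0, 1):
                out += kernel[dy + 1, dx + 1] * p[1 + dy:1 + dy + H, 1 + dx:1 + dx + W]
        return out

    for c in range(3):
        samples = raw * masks[:, :, c]
        weight = conv2(masks[:, :, c])            # how many known samples fall in each window
        rgb[:, :, c] = conv2(samples) / np.maximum(weight, 1e-8)
    return rgb
```

Practical camera pipelines typically use edge-aware variants of this interpolation to reduce color fringing along edges.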

Page 37: Computer Vision - Color

The camera response function relates image irradiance at the image plane to the measured pixel intensity values.

[Diagram: image irradiance E → camera electronics → measured pixel values I]

g : E → I

(Grossberg and Nayar)

Relation between Pixel Values I and Image Irradiance E

Page 38: Computer Vision - Color

• Important preprocessing step for many vision and graphics algorithms such as photometric stereo, invariants, de-weathering, inverse rendering, image based rendering, etc.

g⁻¹ : I → E

• Use a color chart with precisely known reflectances (3.1%, 9.0%, 19.8%, 36.2%, 59.1%, 90%); Irradiance = const * Reflectance.
• Use more camera exposures to fill up the curve.
• The method assumes constant lighting on all patches and works best when the source is far away (for example, sunlight).
• A unique inverse exists because g is monotonic and smooth for all cameras.

[Plot: measured pixel values (0-255) versus normalized image irradiance (0-1), showing the response g and its inverse g⁻¹.]

Radiometric Calibration
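A rough sketch of the calibration idea under the assumptions listed above (the pixel values below are invented, and the simple monotone interpolation stands in for a proper parametric fit):

```python
import numpy as np

# Known chart reflectances (from the slide) and hypothetical measured pixel values for them.
reflectance = np.array([0.031, 0.090, 0.198, 0.362, 0.591, 0.90])
pixel_value = np.array([31, 62, 103, 150, 201, 241], dtype=float)   # made-up measurements

# With constant lighting, relative irradiance is proportional to reflectance,
# so these sample pairs lie on the response curve g: E -> I.
irradiance = reflectance / reflectance.max()

def inverse_response(I):
    """Estimate g^-1(I): map a pixel value back to relative irradiance by interpolation.

    This works because g is monotonic; more exposures would fill in the curve more densely.
    """
    return np.interp(I, pixel_value, irradiance)

print(inverse_response(120.0))   # relative irradiance for a mid-range pixel value
```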

Page 39: Computer Vision - Color

Dynamic Range

Page 40: Computer Vision - Color

• Dynamic Range: Range of brightness values measurable with a camera

(Hood 1986)

[Images: a high-exposure image and a low-exposure image of the same scene.]

• We need 5-10 million values to store all the brightnesses around us.
• But typical 8-bit cameras provide only 256 values!

• Today’s Cameras: Limited Dynamic Range

The Problem of Dynamic Range

Page 41: Computer Vision - Color

High dynamic range imaging

Techniques:
• Debevec: http://www.debevec.org/Research/HDR/
• Columbia: http://www.cs.columbia.edu/CAVE/tomoo/RRHomePage/rrgallery.html
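A minimal sketch of the basic multi-exposure merge behind these techniques (assuming the inverse response g⁻¹ and the exposure times are known; the triangular weighting is just one simple choice, not taken from the referenced papers):

```python
import numpy as np

def merge_exposures(images, exposure_times, inv_response):
    """Merge several differently exposed 8-bit images into one relative radiance map.

    images: list of 2-D uint8 arrays of the same scene.
    exposure_times: exposure time of each image, in seconds.
    inv_response: callable mapping pixel value -> relative irradiance (g^-1).
    """
    num = np.zeros(images[0].shape, dtype=float)
    den = np.zeros(images[0].shape, dtype=float)
    for img, t in zip(images, exposure_times):
        z = img.astype(float)
        w = 1.0 - np.abs(z - 127.5) / 127.5 + 1e-6   # trust mid-range pixels most
        radiance = inv_response(z) / t               # irradiance divided by exposure time
        num += w * radiance
        den += w
    return num / den

# Usage sketch with a fake linear response and two synthetic exposures:
inv = lambda z: z / 255.0
dark  = np.full((4, 4), 40,  dtype=np.uint8)   # short exposure (t = 1/100 s)
light = np.full((4, 4), 160, dtype=np.uint8)   # long exposure  (t = 1/25 s)
hdr = merge_exposures([dark, light], [1 / 100, 1 / 25], inv)
print(hdr[0, 0])   # both exposures agree on the same relative radiance
```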

Page 42: Computer Vision - Color

Color Discrimination

Active approach: using controlled lights
Passive approach: using optical filters

[Diagram: a camera and an LED cluster driven by a controller illuminate the scene with illumination 1 and illumination 2.]

Page 43: Computer Vision - Color

Visual effect of illumination

[Figures: spectral reflectance of cancer vs. normal tissue (400-700 nm); spectral sensitivity of the camera's blue, green, and red channels; spectral power of a synthetic illumination LA, a halogen lamp, and a xenon lamp. RGB distances between the two tissue types under the different illuminations: 115.86, 98.12, and 92.85.]
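The comparison in this figure can be written as a small spectral computation. The sketch below uses invented spectra as stand-ins for the measured curves and simply shows how the RGB distance between two materials changes with the illumination:

```python
import numpy as np

wl = np.arange(400, 701, 5.0)   # wavelengths, nm

def bump(center, width):
    """Toy spectral curve (all curves below are stand-ins for measured data)."""
    return np.exp(-0.5 * ((wl - center) / width) ** 2)

# Hypothetical camera channel sensitivities and material reflectances.
cam = np.stack([bump(600, 40), bump(540, 40), bump(460, 40)])   # R, G, B rows
normal_tissue = 0.6 * bump(620, 80) + 0.2
cancer_tissue = 0.5 * bump(600, 70) + 0.25

def camera_rgb(illuminant, reflectance):
    """RGB response: integrate illuminant x reflectance x channel sensitivity."""
    return (cam * illuminant * reflectance).sum(axis=1)

def rgb_distance(illuminant):
    a = camera_rgb(illuminant, normal_tissue)
    b = camera_rgb(illuminant, cancer_tissue)
    return np.linalg.norm(a - b)

halogen = 0.2 + 0.8 * (wl - 400) / 300        # smooth, red-heavy broadband spectrum
narrow  = bump(610, 15)                        # narrow-band "optimized" illumination
print(rgb_distance(halogen), rgb_distance(narrow))  # discrimination depends on the light
```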

Page 44: Computer Vision - Color

Optimal illumination

Page 45: Computer Vision - Color

Imaging for Autonomous Vehicle

For traffic lights: a passive approach using optimized color filters.

For pedestrian detection: multispectral/hyperspectral imaging (infrared band).

Page 46: Computer Vision - Color

Segmentation Keying

Interactive segmentation

[Seoul National University]
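Keying against a known backdrop color (chroma keying) can be sketched as a simple color-distance threshold. The key color and threshold below are arbitrary example values, not taken from the slides:

```python
import numpy as np

def chroma_key_mask(image, key_rgb=(0, 255, 0), threshold=100.0):
    """Foreground mask: keep pixels whose color is far from the key (backdrop) color.

    image: (H, W, 3) uint8 RGB image; key_rgb and threshold are example values only.
    """
    diff = image.astype(float) - np.array(key_rgb, dtype=float)
    dist = np.linalg.norm(diff, axis=2)          # Euclidean distance to the key color
    return dist > threshold                      # True where the pixel is foreground

# Usage sketch: a 2x2 image with one green-screen pixel and three foreground pixels.
img = np.array([[[0, 250, 10], [200, 30, 40]],
                [[50, 60, 200], [240, 240, 240]]], dtype=np.uint8)
print(chroma_key_mask(img))
```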

Page 47: Computer Vision - Color

Virtual Studio

• NHK STRL: Synthevision, VS, DTPP (1989~1992)

VS Overview paper: S. Gibbs et al. (1996)