Colorization by Example. R. Irony, D. Cohen-Or, D. Lischinski. Tel-Aviv University / The Hebrew University of Jerusalem. Eurographics Symposium on Rendering (2005).


Page 1:

Colorization by Example

R. Irony, D. Cohen-Or, D. Lischinski
Tel-Aviv University / The Hebrew University of Jerusalem

Eurographics Symposium on Rendering (2005)

Page 2:

Introduction

Levin et al. [LLW04] recently proposed a simple yet effective user-guided colorization method. In this method the user is required to scribble the desired colors in the interiors of the various regions. The method then automatically propagates the scribbled colors to produce a completely colorized image.

Page 3:

Introduction

Levin et al.’s colorization. Left: a grayscale image marked with some color scribbles by the user. Middle: Levin et al.’s colorization produces a colorized image. Right: for reference, the original color image is shown.

Page 4:

Introduction

Levin et al.’s colorization. Left: dozens of user-drawn scribbles (some very small). Right: the resulting colorization. This method might require the user to carefully place a multitude of appropriately colored scribbles.

Page 5:

Introduction

Welsh et al. [WAM02] proposed an automatic colorization technique that colorizes an image by matching small pixel neighborhoods in the image to those in a reference image, and transferring colors accordingly.

Page 6:

Supervised Classification

Supervised classification methods typically consist of two phases:
• feature analysis
• classification

In this paper we adopt a classification approach based on the K-nearest-neighbor (Knn) rule [DHS00]. This is an extremely simple yet effective method for supervised classification.
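As a minimal sketch (not from the paper), the Knn rule with Euclidean distances in the feature space can be written as:

```python
import numpy as np

def knn_classify(features, labels, query, k=5):
    """Classify `query` by a majority vote among its k nearest
    labeled feature vectors (Euclidean distance)."""
    dists = np.linalg.norm(features - query, axis=1)
    nearest = np.argsort(dists)[:k]
    votes = labels[nearest]
    # majority vote over the k nearest neighbors
    values, counts = np.unique(votes, return_counts=True)
    return values[np.argmax(counts)]

# two well-separated classes: 0 near the origin, 1 near (10, 10)
rng = np.random.default_rng(0)
feats = np.vstack([rng.normal(0, 1, (20, 2)), rng.normal(10, 1, (20, 2))])
labs = np.array([0] * 20 + [1] * 20)
print(knn_classify(feats, labs, np.array([9.5, 10.2])))  # → 1
```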

Page 7:

Supervised Classification

Linear dimensionality reduction techniques are often used to make such classifiers both more efficient and more effective. For example, PCA-based techniques apply a linear projection that reduces the dimension of the data while maximizing the scatter of all projected samples. Another example is Linear Discriminant Analysis (LDA) [BHK97, DHS00, Fis36], which finds a linear subspace in which the ratio of between-class scatter to within-class scatter is maximized.
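For two classes, the Fisher/LDA direction has the closed form w ∝ Sw⁻¹(μ1 − μ2). A small numpy sketch, illustrative only and not the paper's exact subspace construction:

```python
import numpy as np

def fisher_direction(X1, X2):
    """Two-class LDA: direction maximizing between-class scatter
    relative to within-class scatter, w = Sw^{-1} (mu1 - mu2)."""
    mu1, mu2 = X1.mean(axis=0), X2.mean(axis=0)
    # within-class scatter: sum of the two class scatter matrices
    Sw = np.cov(X1.T) * (len(X1) - 1) + np.cov(X2.T) * (len(X2) - 1)
    w = np.linalg.solve(Sw, mu1 - mu2)
    return w / np.linalg.norm(w)

rng = np.random.default_rng(1)
A = rng.normal([0, 0], [1, 5], (200, 2))   # classes separated along x,
B = rng.normal([3, 0], [1, 5], (200, 2))   # but very noisy along y
w = fisher_direction(A, B)
# the discriminating direction is (close to) the x axis
print(np.round(np.abs(w), 2))
```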

Page 8:

Our approach vs. color transfer.

Page 9:

Colorization by Example

Our algorithm colorizes one or more input grayscale images, based on a partially segmented reference color image. By partial segmentation we mean that one or more mutually disjoint regions in the image have been delineated, and each region has been assigned a unique label. Such segmentations may be either computed automatically or marked manually by the user.

Page 10:

Colorization by Example

This approach consists of the following main conceptual stages:
• Training
• Classification
• Color transfer
• Optimization

Page 11:

Overview of our colorization technique

Page 12:

Training

The luminance channel of the reference image, along with the accompanying partial segmentation, is provided as a training set to a supervised learning algorithm. This algorithm constructs a low-dimensional feature space in which it is easy to discriminate between pixels belonging to differently labeled regions.

Page 13:

Feature Spaces and Classifier

Given the reference color image and its partial segmentation, our first task is to construct a feature space and a corresponding classifier. Every pixel in one of the labeled regions defines a labeled feature vector, a point in the feature space. Given a previously unseen feature vector, the goal of the classifier is to decide which label should be assigned to it.

Page 14:

Feature Spaces and Classifier

The classifier must be able to distinguish between different classes mainly based on texture. We therefore associate each pixel with a feature vector representing its monochromatic texture. We use the Discrete Cosine Transform (DCT) coefficients of a k×k neighborhood around the pixel as its feature vector.
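As an illustrative sketch of building such a feature vector (the neighborhood size k = 7 and the plain orthonormal DCT-II are assumptions, not necessarily the paper's exact settings):

```python
import numpy as np

def dct_matrix(k):
    """Orthonormal DCT-II basis matrix of size k x k."""
    n = np.arange(k)
    M = np.cos(np.pi * (2 * n[None, :] + 1) * n[:, None] / (2 * k))
    M[0] /= np.sqrt(k)           # DC row normalization
    M[1:] *= np.sqrt(2.0 / k)    # AC rows normalization
    return M

def dct_features(gray, x, y, k=7):
    """Feature vector of pixel (x, y): the k*k DCT coefficients
    of the k x k luminance neighborhood centered at (x, y)."""
    r = k // 2
    patch = gray[y - r:y + r + 1, x - r:x + r + 1]
    D = dct_matrix(k)
    return (D @ patch @ D.T).ravel()   # 2-D DCT, flattened to length k^2

img = np.outer(np.arange(16), np.ones(16))  # simple vertical gradient
f = dct_features(img, 8, 8, k=7)
print(f.shape)  # (49,)
```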

Page 15:

Feature Spaces and Classifier

The DCT transform yields a k²-dimensional feature space, populated with the labeled feature vectors corresponding to the training set pixels. A novel feature vector may be naively classified by assigning it the label of its nearest feature space neighbor. A more sophisticated classifier is defined by the K-nearest-neighbor (Knn) rule.

Page 16:

Feature Spaces and Classifier

This classifier examines the K nearest neighbors of the feature vector and chooses the label by a majority vote.

Better results may be obtained by switching to a low-dimensional subspace, custom-tailored to the training set, using an approach similar to linear discriminant analysis (LDA).

Page 17:

Applying Knn in a discriminating subspace: the feature space is populated by points belonging to two classes, magenta and cyan. The yellow highlighted point has a majority of magenta-colored nearest neighbors.

After rotating the space to the UV coordinate system, where V is the principal direction of the intra-difference vectors, and then projecting the points onto the U axis, all of the nearest neighbors are cyan.

Page 18:

Simple Knn-matching based on similar luminance value and neighborhood statistics (e) vs. our matching (g).

Page 19:

Feature Spaces and Classifier

Let T be a transformation which maps the vector of k² DCT coefficients to a point in the low-dimensional subspace. The distance between pixels p and q is then

D(p, q) = ‖T f(p) − T f(q)‖,

where f(x) is the vector of DCT coefficients corresponding to the k×k neighborhood centered at x.

Page 20:

Although the Knn classifier described above is more robust than a naive nearest-neighbor classifier, there can still be many misclassified pixels. Quite a few pixels inside the body of the cheetah are classified as belonging to the background, and vice versa.

Page 21:

Image Space Voting

Let N(p) denote the k×k neighborhood around a pixel p in the input image. We replace the label of p with the dominant label in N(p), where the dominant label is the label with the highest confidence.

Page 22:

Image Space Voting

The weights Wq depend on the distance D(q, Mq) between the pixel q and its best match Mq, where Mq is the nearest neighbor of q in the feature space that has the same label as q. Nℓ(p) is the set of pixels in N(p) with the label ℓ.
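A brute-force sketch of this voting step (the per-pixel confidence weights are assumed to be precomputed; the function name is hypothetical):

```python
import numpy as np

def vote_labels(labels, weights, k=5):
    """Replace each pixel's label by the dominant label (highest
    total weight) inside its k x k image-space neighborhood."""
    H, W = labels.shape
    r = k // 2
    out = labels.copy()
    n_labels = labels.max() + 1
    for y in range(r, H - r):
        for x in range(r, W - r):
            votes = np.zeros(n_labels)
            # accumulate each neighbor's weight for its label
            for dy in range(-r, r + 1):
                for dx in range(-r, r + 1):
                    votes[labels[y + dy, x + dx]] += weights[y + dy, x + dx]
            out[y, x] = np.argmax(votes)
    return out

# a single misclassified pixel inside a uniform region gets voted away
lab = np.zeros((9, 9), dtype=int)
lab[4, 4] = 1
w = np.ones((9, 9))
print(vote_labels(lab, w)[4, 4])  # → 0
```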

Page 23:

(c) shows how this image space voting improves the spatial coherence of the resulting classification.

Page 24:

Color Transfer and Optimization

We work in the YUV color space, where Y is the monochromatic luminance channel, which we use to perform the classification, while U and V are the chrominance channels.

Let C(p) denote the chrominance coordinates of a pixel p. The chrominance of p (with label ℓ) is then given by the weighted average of the chrominance values predicted by the matches of its neighbors,

C(p) = Σ_{q∈N(p)} Wq C(Mq(p)) / Σ_{q∈N(p)} Wq.

Page 25:

Color Transfer and Optimization

Mq(p) denotes the pixel in the example image whose position with respect to Mq is the same as the position of p with respect to q.

Assigning color to pixel p: each neighbor of p (e.g., q, r) has a matching neighborhood in the reference image, which “predicts” a different color for p. The color of p is a weighted average of these predictions.
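The weighted-average color transfer described above can be sketched as follows (the chrominance predictions and weights are made-up illustrative numbers):

```python
import numpy as np

def transfer_chroma(preds, weights):
    """Chrominance of a pixel as the confidence-weighted average
    of the (U, V) values 'predicted' by its neighbors' matches."""
    w = np.asarray(weights, dtype=float)
    w /= w.sum()                  # normalize the weights
    return w @ np.asarray(preds)  # weighted average of predictions

# three neighbors predict slightly different chrominance for p;
# the high-confidence predictions dominate the result
preds = [(0.30, 0.10), (0.32, 0.12), (0.60, 0.50)]
weights = [0.45, 0.45, 0.10]
print(np.round(transfer_chroma(preds, weights), 3))
```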

Page 26:

Color Transfer and Optimization

We transfer color only to pixels whose confidence in their label is sufficiently large, conf > 0.5, and provide the colored pixels as constraints to the optimization-based color interpolation scheme of Levin et al. [LLW04]. The optimization-based interpolation is based on the principle that neighboring pixels with similar luminance should have similar colors.

Page 27:

Color Transfer and Optimization

The interpolation attempts to minimize the difference between the color assigned to a pixel p and the weighted average of the colors of its neighbors,

J(C) = Σ_p ( C(p) − Σ_{q∈N(p)} Wpq C(q) )²,

where Wpq is a weighting function that sums to one.
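A 1-D toy version of this constrained interpolation; the Gaussian weighting on luminance differences and the sigma value are illustrative assumptions:

```python
import numpy as np

def interpolate_chroma(lum, fixed, sigma=0.05):
    """Solve min sum_p (C(p) - sum_q w_pq C(q))^2 on a 1-D signal,
    with chrominance held fixed at the constrained pixels.
    Weights w_pq are large when luminance values are similar."""
    n = len(lum)
    A = np.eye(n)
    b = np.zeros(n)
    for p in range(n):
        if p in fixed:
            b[p] = fixed[p]       # constrained pixel keeps its color
            continue
        qs = [q for q in (p - 1, p + 1) if 0 <= q < n]
        w = np.array([np.exp(-(lum[p] - lum[q]) ** 2 / (2 * sigma ** 2))
                      for q in qs])
        w /= w.sum()              # weights sum to one
        for q, wq in zip(qs, w):
            A[p, q] = -wq         # row encodes C(p) - sum_q w_pq C(q) = 0
    return np.linalg.solve(A, b)

# a step edge in luminance; one colored pixel on each side
lum = np.array([0.2, 0.2, 0.2, 0.8, 0.8, 0.8])
C = interpolate_chroma(lum, fixed={0: 0.1, 5: 0.9})
print(np.round(C, 2))  # color stays flat on each side of the edge
```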

Page 28:

Generating automatic scribbles: pixels with confidence above a predefined threshold are provided as input to the optimization stage.

Page 29:

Closeup of the head: (a) before optimization; (b) confidence map; (c) final result after optimization.

Page 30:

Results

We compare our results with the results achieved by automatic pixelwise color transfer [WAM02]. We further show that naive classifiers alone do not yield satisfactory results, and that our feature space analysis and image space voting greatly improve classification quality.

Page 31:

Results

Page 32:

Results

We also compare our method to the user-assisted method of Levin et al. [LLW04]. The advantage of their technique is that it does not require a reference image. It is sometimes sufficient to manually colorize a small part of the image using Levin et al.'s method, and then complete the colorization automatically, using that small part as the reference.

Page 33:

Results

A significant advantage of our method lies in its ability to automatically colorize an entire series of grayscale images.

Page 34:

Results

Colorization of a series of grayscale images from a single example. Left column: the reference image, its collection of regions, and its luminance channel. On the right, top row: the input grayscale images; middle row: corresponding classifications; bottom row: the resulting colorizations.

Page 35:

Future Work

In the future we plan to investigate the possibility of combining our classification-based approach with automatically established swatches. This may be done by searching for representative regions in the input images that match reference regions particularly well. We would also like to explore other, more sophisticated monochrome texture descriptors.