
Computing Nearest-Neighbor Fields via Propagation-Assisted KD-Trees Kaiming He and Jian Sun

Microsoft Research Asia

Introduction

• Computing a Nearest-Neighbor Field (NNF) means densely matching patches between two images.
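Concretely, with the usual L2 patch distance (the notation here is ours, not from the poster), the NNF stores, for each patch pA(x, y) of the source image A, the position of its (approximate) nearest patch in the target image B:

f(x, y) = argmin over (x', y') of || pA(x, y) − pB(x', y') ||²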

Overview figure: Source Image A, Target Image B, and the resulting Nearest-Neighbor Field.

• The applications of NNF: image inpainting/retargeting [1], texture synthesis, super-resolution, denoising, etc.

• Contribution: a fast and accurate method for computing the NNF.

[1] C. Barnes, E. Shechtman, A. Finkelstein, and D. B. Goldman. PatchMatch: a randomized correspondence algorithm for structural image editing. In SIGGRAPH, 2009.
[2] S. Korman and S. Avidan. Coherency sensitive hashing. In ICCV, 2011.

• Our perspective: approximate nearest-neighbor (ANN) search for multiple but mutually dependent queries.

Diagram: the patches in A form the Query Set; the patches in B form the Candidate Set.

• Methodology comparisons:

Method                                        | Strategy in Query Set | Strategy in Candidate Set
Traditional LSH (Locality Sensitive Hashing)  | None                  | Data-independent
Traditional kd-tree                           | None                  | Data-adaptive
PatchMatch [1]                                | Propagation           | Random sampling
CSH [2] (Coherency Sensitive Hashing)         | Propagation           | Data-independent (LSH)
Ours                                          | Propagation           | Data-adaptive (kd-tree)

• Our advantage: it better utilizes both the dependency among the patches in A and the dependency among the patches in B.

Algorithm

• Basic Algorithm

1. Build a kd-tree using all the patches in the Candidate Set. Each leaf contains no more than m candidates.

2. Scan image A in raster order. For each patch pA(x, y) in A, do propagation-assisted kd-tree search as follows (a minimal Python sketch of these steps is given after this list):

i. Descend the tree to a leaf (Leaf #0);
ii. Propagate a leaf (Leaf #1) from the left/upper neighbor, using the already matched results of pA(x-1, y) and pA(x, y-1);
iii. Find the NN of pA(x, y) among the candidates in all of the above leaves.
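To make the steps traceable end to end, here is a minimal, illustrative Python sketch; it is not the authors' implementation. All names (build_tree, descend, extract_patches, compute_nnf, leaf_max) are made up for this example, the kd-tree is a simple median-split bucket tree over flattened grayscale patches, and the re-ranking step used in the fastest variant is omitted.

import numpy as np

def build_tree(points, ids, leaf_max=8, depth=0):
    # A node is either a leaf {'leaf': ids} holding at most leaf_max candidates,
    # or an internal node {'dim', 'thr', 'lo', 'hi'} splitting on one dimension.
    if len(ids) <= leaf_max:
        return {'leaf': ids}
    d = depth % points.shape[1]              # cycle through patch dimensions
    vals = points[ids, d]
    thr = np.median(vals)
    lo, hi = ids[vals <= thr], ids[vals > thr]
    if len(lo) == 0 or len(hi) == 0:         # degenerate split: make a leaf
        return {'leaf': ids}
    return {'dim': d, 'thr': thr,
            'lo': build_tree(points, lo, leaf_max, depth + 1),
            'hi': build_tree(points, hi, leaf_max, depth + 1)}

def descend(tree, q):
    # Walk from the root to the single leaf that the vector q falls into.
    node = tree
    while 'leaf' not in node:
        node = node['lo'] if q[node['dim']] <= node['thr'] else node['hi']
    return node['leaf']

def extract_patches(img, p):
    # Flatten every p x p patch of a grayscale image into a row vector.
    H, W = img.shape
    rows = []
    for y in range(H - p + 1):
        for x in range(W - p + 1):
            rows.append(img[y:y + p, x:x + p].ravel())
    return np.array(rows)

def compute_nnf(img_a, img_b, p=8, leaf_max=8):
    A, B = extract_patches(img_a, p), extract_patches(img_b, p)
    wa = img_a.shape[1] - p + 1              # patches per row in A
    hb, wb = img_b.shape[0] - p + 1, img_b.shape[1] - p + 1
    tree = build_tree(B, np.arange(len(B)), leaf_max)
    leaf_of = {}                             # cache: candidate index -> its leaf
    nnf = np.empty(len(A), dtype=int)
    for i in range(len(A)):                  # raster-order scan of image A
        cand = list(descend(tree, A[i]))     # Leaf #0
        y, x = divmod(i, wa)
        # Propagation: shift the matches of the left and upper neighbors and
        # add the leaves containing those shifted candidates (Leaf #1, #2).
        for ny, nx, dy, dx in [(y, x - 1, 0, 1), (y - 1, x, 1, 0)]:
            if ny >= 0 and nx >= 0:
                jy, jx = divmod(nnf[ny * wa + nx], wb)
                jy, jx = jy + dy, jx + dx    # shifted candidate position in B
                if jy < hb and jx < wb:
                    k = jy * wb + jx
                    if k not in leaf_of:
                        leaf_of[k] = descend(tree, B[k])
                    cand.extend(leaf_of[k])
        cand = np.unique(cand)
        dist = np.sum((B[cand] - A[i]) ** 2, axis=1)   # L2 patch distances
        nnf[i] = cand[np.argmin(dist)]       # best match among all leaves
    return nnf

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    a, b = rng.random((24, 24)), rng.random((24, 24))
    print(compute_nnf(a, b, p=8)[:5])        # matched patch indices in B

One design note on the sketch: it re-descends the tree to find the leaf of each propagated candidate and caches the result in leaf_of; a real implementation would simply record, at build time, which leaf each candidate patch of B belongs to.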

Diagrams: propagation in PatchMatch [1] vs. the propagation-assisted kd-tree. In both, the match (x'-1, y') of the left neighbor (x-1, y) in image A suggests the shifted candidate (x', y') in image B; PatchMatch tests that single candidate, whereas our method propagates the whole leaf (Leaf #1) that contains it.

Results

• Method Comparison: PatchMatch [1]

• Performance in VidPairs Benchmark [2] (Accuracy vs. Time, 133 image pairs)


Plot: L2 patch distance (error) vs. running time in seconds, 2Mp images, 8x8 patches; curves for PM, CSH, Ours (no re-ranking), Ours (re-ranking), Traditional kd-tree, and Ground Truth.

Observations:
1. 10-20x faster than PatchMatch [1] and 2-5x faster than CSH [2] at the same accuracy.
2. 70% smaller error than PatchMatch and 50% smaller error than CSH at the same running time.
3. Traditional kd-trees can be comparable with PatchMatch.

• Visual Comparisons

Figures: NNFs and error maps (Image A, Image B; PM's NNF, CSH's NNF, our NNF, ground truth; PM's error, CSH's error, our error) and reconstructed images for PM, CSH, Ours, and GT.

• Method Comparisons: kd-tree

Histograms of e0 and e1 (range 0-120) for five kd-tree search strategies:
(a) Standard search (err: 11.13)
(b) Priority search (err: 10.71)
(c) Randomized kd-trees (err: 7.80)
(d) Propagation from GT (err: 2.77)
(e) Our propagation search (err: 3.04)

e0: error of Leaf #0; e1: error of Leaf #1. Leaf #1 is given by backtracking (standard/priority search), another random tree (randomized kd-trees), or propagation (ours).