Transcript of Geometric Modeling Summer Semester 2012
Dimensionality Reduction, Intrinsic Geometry and Discrete Differential Operators
Dimensionality Reduction
Fitting Affine Subspaces
The following problems can be solved exactly:
• Best fitting line to a set of 2D or 3D points
• Best fitting plane to a set of 3D points
• In general: the affine subspace of ℝ^d with dimension k ≤ d that best approximates a set of data points x_i ∈ ℝ^d.
This leads to the famous principal component analysis (PCA).
Start: 0-dim Subspaces
Easy Start: The optimal 0-dimensional affine subspace
• Given a set X of n data points x_i ∈ ℝ^d, what is the point x_0 with minimum least-squares error to all data points?
• Answer: just the sample mean (average):

  x_0 = (1/n) Σ_{i=1}^{n} x_i
One Dimensional Subspaces...
Next:
• What is the optimal line (1D subspace) that approximates a set of data points X?
• Two questions:
  Optimum origin (point on the line)?
  → This is still the average.
  Optimum direction?
  → We will look at that next...
• Parametric line equation:

  x(t) = x_0 + t·r,  with x_0 ∈ ℝ^d, r ∈ ℝ^d, ‖r‖ = 1
Best Fitting Line
Result:

  Σ_{i=1}^{n} dist(line, x_i)² = −r^T S r + const.,
  with S = Σ_{i=1}^{n} (x_i − x_0)(x_i − x_0)^T and ‖r‖ = 1

Eigenvalue Problem:
• r^T S r is a Rayleigh quotient
• Minimizing the energy ⇔ maximizing the quotient
• Solution: eigenvector with the largest eigenvalue
General Case
Fitting a k-dimensional affine subspace:
• k = 1 : line
• k = 2 : plane
• k = 3 : 3D subspace
• ...
Simple rule:
• Use the k eigenvectors with the largest eigenvalues from the spectrum of S.
• This gives the (total) least-squares optimal subspace that approximates the data set X.
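The rule above can be sketched in a few lines of NumPy (a minimal sketch, not from the lecture; the function name `fit_affine_subspace` and the synthetic test data are mine):

```python
import numpy as np

def fit_affine_subspace(X, k):
    """Mean and (d x k) orthonormal basis of the best-fitting
    k-dimensional affine subspace (total least squares)."""
    x0 = X.mean(axis=0)               # optimal origin: the sample mean
    C = X - x0                        # centered data
    S = C.T @ C                       # scatter matrix S
    w, V = np.linalg.eigh(S)          # eigenvalues in ascending order
    B = V[:, ::-1][:, :k]             # the k eigenvectors with largest eigenvalues
    return x0, B

# Noisy samples near the line through the origin with direction (1,1,0)/sqrt(2)
rng = np.random.default_rng(0)
t = rng.uniform(-1, 1, 200)
X = np.outer(t, [1.0, 1.0, 0.0]) / np.sqrt(2) + 0.01 * rng.normal(size=(200, 3))
x0, B = fit_affine_subspace(X, 1)
print(np.abs(B[:, 0]))                # direction recovered up to sign
```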
PCA Maximum Variance Formulation
Alternate Formulation:
• Let v ∈ ℝ^d (with v^T v = 1) span the 1D subspace that maximizes the variance of the data X ⊂ ℝ^d.
• Each data point x_i is projected onto a scalar value v^T x_i.
• The mean of the projected data is v^T x_0.
• The variance of the projected data is given by:

  (1/n) Σ_{i=1}^{n} (v^T x_i − v^T x_0)² = v^T S v,  with S = (1/n) Σ_{i=1}^{n} (x_i − x_0)(x_i − x_0)^T

• The problem now reduces to

  v* = argmax { v^T S v + λ (1 − v^T v) }

• Solution: eigenvector of S with the largest eigenvalue λ1.
Statistical Interpretation
Observation:
• Up to the normalization 1/(n−1), S is the covariance matrix of the data X.
• PCA can be interpreted as fitting a Gaussian distribution N(μ, Σ) and computing its main axes:

  μ = x_0 = (1/n) Σ_{i=1}^{n} x_i

  Σ = (1/(n−1)) Σ_{i=1}^{n} (x_i − x_0)(x_i − x_0)^T

  N_{Σ,μ}(x) = 1 / ( (2π)^{d/2} det(Σ)^{1/2} ) · exp( −½ (x − μ)^T Σ^{−1} (x − μ) )

[Figure: data points with mean x_0 and the principal axes v_1, v_2 of the fitted Gaussian N(μ, Σ), scaled by the eigenvalues λ1, λ2]
Applications
Fitting a line to a point cloud in ℝ³:
• Sample mean and direction of the maximum eigenvalue:

  x(t) = x_0 + t·v_1

Plane fitting in ℝ³:
• Sample mean and the two directions of maximum eigenvalues.
• Smallest eigenvalue → its eigenvector points in the normal direction.
• The aspect ratio (λ3/λ2) is a measure of "flatness" (quality of fit).
[Figure: two point clouds around x_0, one with (λ2/λ1) small, one with (λ2/λ1) larger]
Applications
Application: Normal estimation in point clouds
• Given a set of points P ⊂ ℝ³ sampled from a smooth surface.
• We want to estimate surface normals.
Algorithm:
• For each point, compute the k-nearest neighbors (e.g. k = 20).
• Compute a PCA (average, main axes) of these points.
• Eigenvector with the smallest eigenvalue → normal direction.
• The other two eigenvectors → tangent vectors.
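A minimal sketch of this algorithm (brute-force neighbor search, NumPy only; the helper name `estimate_normals` is mine, and a real implementation would use a spatial data structure for the k-nearest-neighbor queries):

```python
import numpy as np

def estimate_normals(P, k=20):
    """PCA over the k nearest neighbors of each point; the eigenvector
    with the smallest eigenvalue is the estimated normal."""
    normals = np.empty_like(P)
    for i, p in enumerate(P):
        d = np.linalg.norm(P - p, axis=1)
        nbrs = P[np.argsort(d)[:k]]        # k nearest neighbors (incl. p itself)
        C = nbrs - nbrs.mean(axis=0)       # centered neighborhood
        w, V = np.linalg.eigh(C.T @ C)     # eigenvalues ascending
        normals[i] = V[:, 0]               # smallest-eigenvalue eigenvector
    return normals

# Points sampled from the plane z = 0: every normal should be +-(0, 0, 1).
rng = np.random.default_rng(1)
P = np.column_stack([rng.uniform(-1, 1, (300, 2)), np.zeros(300)])
N = estimate_normals(P)
print(np.abs(N[0]))
```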
Example
[Figure: points, normals, tangential frames]
Dimensionality Reduction
Notations:
• X = [x_1, …, x_i, …, x_n], x_i ∈ ℝ^d
• U is a d×d orthogonal matrix with U^T U = I_d.
Projection from ℝ^d onto ℝ^k removes d−k rows of U^T to obtain U_k^T:
• Y = U_k^T X.
Reconstruction of ℝ^d from ℝ^k removes d−k columns of U to obtain U_k:
• X′ = U_k Y.
det(U) = +1 implies a rotation matrix.
Idea: projection of higher dimensional data onto a lower dimensional subspace.
• x ∈ ℝ^d → y ∈ ℝ^k where k ≪ d.
• Linear dimensionality reduction using PCA computes Y = U_k^T X.
Dimensionality Reduction
[Diagram: input (x_1, …, x_d) → Projection → (y_1, …, y_k), with k ≪ d → Reconstruction → output (x′_1, …, x′_d)]
* Information is LOST
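The projection/reconstruction pipeline and the information loss can be illustrated as follows (a sketch with synthetic data; variable names are mine). The squared reconstruction error equals the sum of the discarded eigenvalues of the scatter matrix:

```python
import numpy as np

rng = np.random.default_rng(2)
X = rng.normal(size=(3, 100)) * np.array([[5.0], [1.0], [0.1]])  # d=3, n=100
X = X - X.mean(axis=1, keepdims=True)        # centered data

S = X @ X.T                                  # scatter matrix, d x d
w, U = np.linalg.eigh(S)                     # eigenvalues ascending
U = U[:, ::-1]                               # reorder: largest eigenvalue first

k = 2
Y = U[:, :k].T @ X                           # projection: R^d -> R^k
X_rec = U[:, :k] @ Y                         # reconstruction back in R^d

err = np.linalg.norm(X - X_rec) ** 2         # total squared reconstruction error
print(err, w[0])                             # equals the discarded (smallest) eigenvalue
```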
Metric Multi-Dimensional Scaling (MDS)
PCA uses the covariance matrix:
• S = (1/n) X X^T, assuming centered data (i.e. zero mean).
• The data points are represented in a vector space.
• Dimension of S is d×d.
MDS uses the Gram matrix (dot products):
• G = X^T X captures the (dis)similarity of data points.
• The data points themselves are not explicitly required.
• Dimension of G is n×n.
• Goal: embed the data in a k-dimensional space such that some metric is preserved.
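Classical metric MDS can be sketched directly from this description: eigendecompose the Gram matrix and keep the top-k eigenvectors, scaled by the square roots of their eigenvalues (a minimal sketch; `classical_mds` is my name for it):

```python
import numpy as np

def classical_mds(G, k):
    """Embedding Y (k x n) such that Y^T Y approximates the Gram matrix G."""
    w, V = np.linalg.eigh(G)                     # eigenvalues ascending
    w, V = w[::-1][:k], V[:, ::-1][:, :k]        # keep the top k
    return (V * np.sqrt(np.clip(w, 0.0, None))).T

rng = np.random.default_rng(3)
X = rng.normal(size=(2, 50))
X = X - X.mean(axis=1, keepdims=True)            # centered data
G = X.T @ X                                      # Gram matrix, n x n

Y = classical_mds(G, 2)
# Pairwise distances are preserved (the embedding is unique up to rotation).
d_orig = np.linalg.norm(X[:, 0] - X[:, 1])
d_mds = np.linalg.norm(Y[:, 0] - Y[:, 1])
print(d_orig, d_mds)
```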
Part III: Intrinsic Geometry and Intrinsic Mappings
Scenario
Mapping between Surfaces
• Intrinsic view: only the metric tensor
• Ignore isometric deformations
• Applications:
  Deformable shape matching
  Texture mapping (flattening 3D): "parametrization"
[Figure: map f between surfaces S1 and S2]
Differential Geometry Revisited
Parametric Patches
Function f(u, v) ∈ ℝ³:
• Canonical tangents: ∂_u f(u, v), ∂_v f(u, v)
• Normal:

  n(u, v) = ( ∂_u f(u, v) × ∂_v f(u, v) ) / ‖ ∂_u f(u, v) × ∂_v f(u, v) ‖

[Figure: parameter domain Ω ⊂ ℝ², point (u, v) mapped by f to f(u, v) ∈ ℝ³ with tangents ∂_u f, ∂_v f and normal n(u, v)]
Fundamental Forms
Fundamental Forms:
• Describe the local parameterized surface.
• Measure...
  ...distortion of length (first fundamental form)
  ...surface curvature (second fundamental form)
First Fundamental Form
• Also known as the metric tensor.
• It can be written as a 2×2 symmetric matrix:

  I = [ ∂_u f·∂_u f   ∂_u f·∂_v f ]  =:  [ E  F ]
      [ ∂_u f·∂_v f   ∂_v f·∂_v f ]      [ F  G ]

• The matrix is symmetric and positive definite (for a regular parametrization; semi-definite otherwise).
• Defines a generalized scalar product that measures lengths and angles on the surface.
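Numerically, E, F and G can be estimated for any patch f by finite differences; here for a unit-sphere patch (a sketch, assuming the parametrization f(u, v) = (cos u cos v, sin u cos v, sin v)):

```python
import numpy as np

def f(u, v):
    """Unit-sphere patch."""
    return np.array([np.cos(u) * np.cos(v), np.sin(u) * np.cos(v), np.sin(v)])

def first_fundamental_form(f, u, v, h=1e-6):
    fu = (f(u + h, v) - f(u - h, v)) / (2 * h)   # canonical tangent d_u f
    fv = (f(u, v + h) - f(u, v - h)) / (2 * h)   # canonical tangent d_v f
    E, F, G = fu @ fu, fu @ fv, fv @ fv
    return np.array([[E, F], [F, G]])

I = first_fundamental_form(f, 0.3, 0.5)
print(I)   # for this patch: E = cos^2(v), F = 0, G = 1
```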
Second Fundamental Form
• Also known as the shape operator or curvature tensor.
• It can be written as a 2×2 symmetric matrix:

  II = [ ∂_uu f·n   ∂_uv f·n ]  =:  [ L  M ]
       [ ∂_uv f·n   ∂_vv f·n ]      [ M  N ]

Eigen-analysis:
• The eigenvalues of the second fundamental form for an orthonormal tangent basis are called principal curvatures κ1, κ2.
• The corresponding orthogonal eigenvectors are called principal directions of curvature.
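Both fundamental forms, and from them the principal curvatures (as eigenvalues of I⁻¹·II), can be estimated by finite differences; for the unit sphere both principal curvatures should have magnitude 1 (a sketch; the parametrization and step size h are my choices):

```python
import numpy as np

def f(u, v):
    """Unit-sphere patch."""
    return np.array([np.cos(u) * np.cos(v), np.sin(u) * np.cos(v), np.sin(v)])

def fundamental_forms(f, u, v, h=1e-4):
    fu = (f(u + h, v) - f(u - h, v)) / (2 * h)
    fv = (f(u, v + h) - f(u, v - h)) / (2 * h)
    n = np.cross(fu, fv)
    n /= np.linalg.norm(n)                      # unit normal
    fuu = (f(u + h, v) - 2 * f(u, v) + f(u - h, v)) / h ** 2
    fvv = (f(u, v + h) - 2 * f(u, v) + f(u, v - h)) / h ** 2
    fuv = (f(u + h, v + h) - f(u + h, v - h)
           - f(u - h, v + h) + f(u - h, v - h)) / (4 * h ** 2)
    I = np.array([[fu @ fu, fu @ fv], [fu @ fv, fv @ fv]])
    II = np.array([[fuu @ n, fuv @ n], [fuv @ n, fvv @ n]])
    return I, II

I, II = fundamental_forms(f, 0.3, 0.5)
kappa = np.linalg.eigvals(np.linalg.solve(I, II))   # shape operator I^-1 II
print(kappa)   # magnitudes ~ 1 on the unit sphere (sign depends on normal orientation)
```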
How to build a metric
• A metric at point p is a function g_p: T_pℳ × T_pℳ → ℝ such that:
  g_p is bilinear: g_p(a·u_p + b·v_p, w_p) = a·g_p(u_p, w_p) + b·g_p(v_p, w_p) and
  g_p(w_p, a·u_p + b·v_p) = a·g_p(w_p, u_p) + b·g_p(w_p, v_p)
  g_p is symmetric: g_p(u_p, v_p) = g_p(v_p, u_p)
  g_p is non-degenerate: the map v_p ↦ g_p(u_p, v_p) is not identically zero for any u_p ≠ 0
How to build a metric
Tangent Space
• For a d-dimensional manifold ℳ, at each point p ∈ ℳ there exists a vector space T_pℳ, called the "tangent space".
• It consists of all tangent vectors to the manifold at point p.
[Figure: tangent plane T_pℳ at p]
How to build a metric
Inner Product Metric
• Two tangent vectors at point p can be written as u_p = α1 ∂_u f + α2 ∂_v f and v_p = β1 ∂_u f + β2 ∂_v f.
• The inner product is defined as:

  g_p(u_p, v_p) = ⟨u_p, v_p⟩ = [α1 α2] [ E  F ; F  G ] [β1 β2]^T

  It inherently uses the first fundamental form.
• g_p(u_p, v_p) is symmetric, bilinear and non-degenerate, and hence a metric.
[Figure: tangent plane T_pℳ at p]
Riemannian Manifolds
Riemannian Manifold
• Manifold topology, d-dimensional
  We mostly focus on 2-manifolds embedded in ℝ³
• Real differentiable manifold.
• Local parametrization: tangent space
• Intrinsic metric (metric tensor everywhere)
Allows us to define various geometric notions on the manifold:
– Angles
– Lengths of curves
– Areas (or volumes)
– Curvature
– Gradients of functions
– Divergence of vector fields
[Figure: standard metric vs. non-standard (positive definite) metric]
Riemannian Manifolds
Arclength:
• Let c(t) = (u(t), v(t)) be a curve in the parameter domain, defined on an interval [a, b].
• f(u(t), v(t)) will trace out a parametric curve on the surface f.
Arclength of the parametric curve on the interval [a, b]:

  s = ∫_a^b ‖ (d/dt) f(u(t), v(t)) ‖ dt
    = ∫_a^b ‖ u̇ ∂_u f + v̇ ∂_v f ‖ dt
    = ∫_a^b √( u̇² ∂_u f·∂_u f + 2 u̇ v̇ ∂_u f·∂_v f + v̇² ∂_v f·∂_v f ) dt
    = ∫_a^b √( u̇² E + 2 u̇ v̇ F + v̇² G ) dt
    = ∫_a^b √( [u̇ v̇] [ E  F ; F  G ] [u̇ v̇]^T ) dt
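The equivalence of the intrinsic (metric-based) and extrinsic arclength can be checked numerically, here for a quarter of the equator of the unit sphere (a sketch; on the equator E = 1, F = 0, G = 1 for this parametrization):

```python
import numpy as np

def f(u, v):
    """Unit-sphere patch."""
    return np.array([np.cos(u) * np.cos(v), np.sin(u) * np.cos(v), np.sin(v)])

# Curve c(t) = (u(t), v(t)) = (t, 0): a quarter of the equator, t in [0, pi/2].
ts = np.linspace(0.0, np.pi / 2, 2001)
du, dv = 1.0, 0.0                       # u' = 1, v' = 0 along the curve

# Intrinsic: integrate sqrt(E u'^2 + 2 F u' v' + G v'^2) dt with
# E = cos^2(v) = 1 on the equator, F = 0, G = 1.
speed = np.sqrt(1.0 * du ** 2 + 0.0 + 1.0 * dv ** 2) * np.ones_like(ts)
s_intrinsic = np.sum(0.5 * (speed[1:] + speed[:-1]) * np.diff(ts))

# Extrinsic: sum the chord lengths of the embedded curve in R^3.
pts = np.array([f(t, 0.0) for t in ts])
s_extrinsic = np.linalg.norm(np.diff(pts, axis=0), axis=1).sum()

print(s_intrinsic, s_extrinsic)         # both ~ pi/2
```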
Riemannian Manifolds
Surface Area:
• Apply the integral transformation theorem:

  area(f) = ∫_Ω ‖ ∂_u f × ∂_v f ‖ du dv

• Using Lagrange's identity:

  area(f) = ∫_Ω √( (∂_u f·∂_u f)(∂_v f·∂_v f) − (∂_u f·∂_v f)² ) du dv
          = ∫_Ω √( EG − F² ) du dv
          = ∫_Ω √( det [ E  F ; F  G ] ) du dv

Riemannian Manifolds
Curvature:
• Principal curvatures κ1, κ2 : eigenvalues of the second fundamental form for an orthonormal tangent basis.
• Normal curvature: norm of the projection of the derivative dT/ds onto the normal n; for a unit tangent direction t̄:

  κ_n = κ(t̄) = t̄^T [ L  M ; M  N ] t̄,  with max_t̄ κ(t̄) = κ1 and min_t̄ κ(t̄) = κ2

• Mean curvature: H = ½ (κ1 + κ2) = ½ tr(II)  (orthonormal tangent basis)
• Gaussian curvature: K = κ1 κ2 = det(II)  (orthonormal tangent basis)
Source: Wikipedia
Source: Wikipedia
Riemannian Manifolds
Curvature:
• Geodesic curvature κ_g : norm of the projection of the derivative dT/ds onto the tangent plane. It allows us to distinguish the inherent curvature of the curve in the (u, v) domain from the curvature induced by the mapping f in ℝ³.
• Total curvature: κ² = κ_g² + κ_n²
• For circles of radius r: κ = 1/r.
• For great circles on the unit sphere: κ = κ_n = 1, κ_g = 0, i.e. locally "straight", e.g. on the Earth.
• For small circles of radius r on the unit sphere: κ_g = √(1 − r²) / r.
Riemannian Manifolds
Geodesics:
• Curves on a surface which minimize the length between their end points are called geodesics.
• A path minimizing energy is just a geodesic parameterized by arc length.
• A curve on a surface with zero geodesic curvature is a geodesic.

  s = ∫_a^b √( [u̇ v̇] [ E  F ; F  G ] [u̇ v̇]^T ) dt

  energy = ∫_a^b [u̇ v̇] [ E  F ; F  G ] [u̇ v̇]^T dt
Types of Mappings
Given: Riemannian manifolds ℳ1, ℳ2
Consider: functions f: ℳ1 → ℳ2
Important types of mappings:
• Isometric: preserves distances, angles and area
• Conformal: preserves (only) angles
• Equi-areal (incompressible): preserves area
Isometric Mapping
Definition
• A mapping f between two surface patches ℳ1 and ℳ2 is an isometric mapping if it preserves distances on them.
• An isometric mapping is symmetric, i.e. f⁻¹ is also an isometry.
[Figure: f: ℳ1 → ℳ2 and f⁻¹: ℳ2 → ℳ1]
Isometric Mapping
Isometric surfaces have the same parametric domain
• If f: ℳ1 → ℳ2 is an isometric mapping, then both surfaces have the same intrinsic parameterization, i.e.

  f1⁻¹(ℳ1) = f2⁻¹(ℳ2) = Ω(u, v)

[Figure: f1: Ω → ℳ1, f2: Ω → ℳ2, f: ℳ1 → ℳ2]
Isometric Mapping
Surfcap: University of Surrey
Isometric Mapping
To solve for an isometry:
• Either find the explicit parameterizations f1 and f2 to recover the implicit domain,
• OR find an intrinsic mapping g of ℳ1 and ℳ2 into an isometry-invariant space Y.
[Figure: f1: Ω → ℳ1, f2: Ω → ℳ2; g: ℳ1 → Y, g: ℳ2 → Y, with Y ⊂ ℝ^k]
Isometric Mapping
Property of the Isometry Invariant Space
• The geodesic distance between any pair of points in ℳ1 (and between their images in ℳ2) should be equivalent to the Euclidean distance between their images in Y.
[Figure: f1: Ω → ℳ1, f2: Ω → ℳ2; g: ℳ1 → Y, g: ℳ2 → Y, with Y ⊂ ℝ^k]
Spectral Graph Methods for 3D Shape Analysis
Discrete Manifold Representation
Surface Patches in Practice
• Most existing shape acquisition methods yield noisy point clouds instead of a nice parametric surface representation.
• Finding a parameterization of a complex real-world object is practically infeasible.
Represent the surface patch with an underlying locally connected graph structure:
• Distances are assumed to be locally Euclidean.
• In practice we assume that isometric transforms keep the topology (i.e. the connectivity) of the underlying graph intact.
Graph Based Representation
Popular Representation
• Mesh representations of 3D objects are traditionally popular.
• They enable a direct application of graph-based tools for shape analysis tasks.
Spectral Graph Theory (SGT)
Spectral graph theory analyzes properties of graphs via the eigenvalues and eigenvectors of various graph matrices.
• Builds on well-studied algebraic properties of graph matrices.
• Provides a natural link between differential operators on continuous and discrete manifold representations.
• Allows us to embed a discrete manifold (graph) into an isometry-invariant space.
• Provides an intrinsic spectral metric for isometry-invariant distance computation.
References:
• F. R. K. Chung. Spectral Graph Theory. 1997.
• M. Belkin and P. Niyogi. Laplacian Eigenmaps for Dimensionality Reduction and Data Representation. Neural Computation, 15, 1373–1396 (2003).
• U. von Luxburg. A Tutorial on Spectral Clustering. Statistics and Computing, 17(4), 395–416 (2007).
• Software: http://open-specmatch.gforge.inria.fr/index.php
Spectral Graph Theory (SGT)
Spectral Embedding
[Figure: input shape in ℝ³ → SVD of graph matrices to obtain the eigenvectors [u_1, …, u_k] → output embedding in ℝ^k, an isometry-invariant space]
* Typically k ≫ 3, i.e. we can now embed the surface in a higher dimensional space.
Geodesic distances in the input space ℝ³ can be approximated by Euclidean distances in the new space ℝ^k spanned by the eigenvectors of certain graph matrices.
Recall Dimensionality Reduction (DR)
Idea: projection of higher dimensional data onto a lower dimensional subspace.
• x ∈ ℝ^d → y ∈ ℝ^k where k ≪ d.
• Linear dimensionality reduction using PCA computes Y = U_k^T X.
[Diagram: input (x_1, …, x_d) → Projection → (y_1, …, y_k), with k ≪ d → Reconstruction → output (x′_1, …, x′_d)]
* Information is LOST
Difference between SGT and DR
In Dimensionality Reduction:
• In linear DR (PCA) we consider the eigendecomposition of the scatter/covariance matrix of size d×d, where d is the original dimension of the input data.
• The matrices here are typically full, with all entries non-zero.
• We find a least-squares fit of the data.
In Spectral Graph Theory:
• We consider the eigendecomposition of various graph matrices of size n×n, where n is the number of data points.
• The matrices here are typically sparse, with very few non-zero entries.
• We minimize a different criterion, one that preserves geodesic distances on the surface patch.
Graph Matrices
Basic Graph Notations
• Consider a simple graph with no loops and no multiple edges, G = (V, E):
  V = {v_1, …, v_n} is the vertex set with cardinality (number of vertices) |V| = n.
  E = {e_{i,j}} is the edge set, each element e_{i,j} representing an edge between two adjacent vertices v_i ~ v_j.
Adjacency and Degree Matrix
• The adjacency matrix of graph G is a symmetric matrix A = [a_{i,j}]_{n×n} with

  a_{i,j} = w_{i,j} if e_{i,j} ∈ E,  a_{i,j} = 0 if e_{i,j} ∉ E,  a_{i,j} = 0 if i = j

  w_{i,j} is a weight assigned to each edge; for an unweighted graph w_{i,j} = 1.
• The degree matrix of graph G is D = [diag(d_i)]_{n×n} where d_i = Σ_{v_j ~ v_i} w_{i,j}.
  A graph G is connected if d_i > 0 ∀ i ∈ {1, …, n}.
  A graph G is regular if d_i = d_j ∀ i ≠ j ∈ {1, …, n}.
Example: a graph with 5 vertices and 6 edges (e.g. edge e_{1,5} between v_1 and v_5).
A = [ 0 0 1 0 1
      0 0 1 0 0
      1 1 0 1 1
      0 0 1 0 1
      1 0 1 1 0 ],   D = diag(2, 1, 4, 2, 3)
• Eigendecomposition of the adjacency matrix A yields n smooth eigenvectors that can be seen as discrete functions defined on the graph vertices:

  A = U Λ U^T with U = [u_1, …, u_n] as the matrix of eigenvectors.

  Hence u_k = [u_k(v_1), …, u_k(v_n)]^T.
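The example above can be rebuilt in code; the edge set is the one implied by the degree matrix D = diag(2, 1, 4, 2, 3) (a sketch using 0-based indices):

```python
import numpy as np

n = 5
# Edges of the example graph, 0-based: degrees come out as (2, 1, 4, 2, 3).
edges = [(0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0             # unweighted: w_ij = 1, zero diagonal

D = np.diag(A.sum(axis=1))              # degree matrix
print(np.diag(D))                       # [2. 1. 4. 2. 3.]

# A is symmetric, so A = U Lambda U^T with orthonormal eigenvectors,
# i.e. n discrete functions on the vertices.
w, U = np.linalg.eigh(A)
print(np.allclose(U @ np.diag(w) @ U.T, A))   # True
```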
Discrete Functions on Graph
Definition
• Let f be a smooth real-valued function on graph G such that f: V → ℝ. It assigns a real number to each node of the graph.
• A discrete vector representation of the continuous function f is written as f ∈ ℝ^n, where f = [f(v_1), …, f(v_n)]^T.
[Figure: function values f(v_1), …, f(v_5) attached to the graph vertices; an eigenvector u_k likewise assigns the values u_k(v_1), …, u_k(v_5)]
Graph Matrix as Discrete Operator
Adjacency Matrix as an Operator
• The adjacency matrix A can be viewed as an operator:

  g = A f,  where g(v_i) = Σ_{v_j ~ v_i} f(v_j)

• In quadratic form we can write:

  f^T A f = Σ_{v_i ~ v_j} f(v_i) f(v_j) w_{i,j}

Example (5-vertex graph above): g = A f gives
  g(v_1) = f(v_3) + f(v_5)
  g(v_2) = f(v_3)
  g(v_3) = f(v_1) + f(v_2) + f(v_4) + f(v_5)
  g(v_4) = f(v_3) + f(v_5)
  g(v_5) = f(v_1) + f(v_3) + f(v_4)
Discrete Laplace Operator
Laplacian Matrix
• The Laplacian matrix L of graph G is an n×n symmetric matrix:

  L = D − A

• This matrix can also be viewed as an operator:

  g = L f,  where g(v_i) = Σ_{v_j ~ v_i} ( f(v_i) − f(v_j) )

• In quadratic form we can write:

  f^T L f = Σ_{v_i ~ v_j} ( f(v_i) − f(v_j) )²

Example (5-vertex graph above): (L f)(v_4) = 2 f(v_4) − f(v_3) − f(v_5).
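A quick numerical check of both views of L = D − A on the example graph (a sketch; the test function f is arbitrary):

```python
import numpy as np

n = 5
edges = [(0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]   # example graph, 0-based
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
D = np.diag(A.sum(axis=1))
L = D - A                                 # graph Laplacian

f = np.array([3.0, -1.0, 2.0, 0.0, 5.0])  # an arbitrary function on the vertices

# Operator view: (L f)(v_i) = sum over neighbors of (f(v_i) - f(v_j)).
g = L @ f
print(g[1], f[1] - f[2])                  # v_2 has the single neighbor v_3

# Quadratic form view: f^T L f = sum over edges of (f(v_i) - f(v_j))^2.
q = sum((f[i] - f[j]) ** 2 for i, j in edges)
print(f @ L @ f, q)
```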
Continuous vs. Discrete Laplace
Local Parametrization
Gradient:
• Continuous: ∇f(p) = ( ∂/∂u f(p), ∂/∂v f(p) ), the derivative vector in the tangent space.
• Discrete: ∇f(v_i) = f(v_j) − f(v_i), the directional derivative w.r.t. edge e_{i,j}.
Laplace:
• Continuous: Δf(p) = div ∇f(p) = ∇·∇f(p) = ∂²/∂u² f(p) + ∂²/∂v² f(p), the Laplace operator computed at p.
• Discrete: Δf(v_i) = div ∇f(v_i) = ∇^T ∇f(v_i), i.e. (L f)(v_i) = Σ_{v_j ~ v_i} ( f(v_i) − f(v_j) ), the Laplace operator computed at v_i.
Discrete Laplace Operator
Laplacian Operator
• Minimization of the quadratic form f^T L f = Σ_{v_i ~ v_j} ( f(v_i) − f(v_j) )² over f yields a smooth function that maps neighboring vertices of G close together: it is desired that f(v_i) and f(v_j) are mapped close to each other on the real line.
• One important consequence: if a graph is not regular (i.e. not uniformly connected), then a set of strongly connected vertices will be mapped closer together than a set of weakly connected vertices.
Laplacian Eigenvectors
• Minimization of the quadratic form f^T L f over f can be written as a Rayleigh quotient:

  min_f ( f^T L f ) / ( f^T f )

• The solution of the minimization is a family of smooth orthonormal functions, i.e. the eigenfunctions of the matrix L corresponding to increasing eigenvalues.
• L U = U Λ with U = [u_1, …, u_n] and Λ = diag(λ1, …, λn) such that 0 = λ1 < λ2 ≤ λ3 ≤ … ≤ λn.
• L·1 = 0, λ1 = 0 (if the graph is connected): the trivial solution is the constant vector.
• u_2 with L u_2 = λ2 u_2 is called the Fiedler vector.
• ∀ k ∈ {2, …, n}: u_k^T · 1 = 0, by the orthonormality of the eigenvectors.
• Hence Σ_{i=1}^{n} u_k(v_i) = 0.
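The listed properties are easy to verify numerically on the example graph (a sketch):

```python
import numpy as np

n = 5
edges = [(0, 2), (0, 4), (1, 2), (2, 3), (2, 4), (3, 4)]   # example graph, 0-based
A = np.zeros((n, n))
for i, j in edges:
    A[i, j] = A[j, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)       # eigenvalues ascending: 0 = l1 < l2 <= ...
print(lam[0])                    # ~ 0 (the graph is connected)
print(U[:, 0])                   # the constant vector, up to sign and scale
fiedler = U[:, 1]                # eigenvector of lambda_2: the Fiedler vector
print(fiedler.sum())             # ~ 0: orthogonal to the constant eigenvector
```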
Laplacian Eigenvectors
[Figure: eigenvectors u_2, u_3, u_4, u_5 visualized as color maps on a 3D shape]
Sharma et al., Symp. on Manifold Learning 2009
Types of Laplace Discretization
The weighted Laplacian acts as:

  g = L f,  where g(v_i) = Σ_{v_j ~ v_i} w_{i,j} ( f(v_i) − f(v_j) )

Binary Weighting (non-geometric)
• The weights of the adjacency matrix are set to 0 or 1, i.e. all edges are equally weighted. Also known as the umbrella operator.
Gaussian Weighting
• The weights of the adjacency matrix are set to N_{μ,σ}( ‖x_i − x_j‖ ), a Gaussian of the Euclidean edge length, i.e. short edges are weighted more than longer ones.
Cotangent Weighting
• The weights of the adjacency matrix are computed in terms of the cotangents of the angles of the triangles in the triangulated mesh, i.e. areas with higher curvature are weighted more.
Reference: Discrete Laplace operators: No free lunch. M. Wardetzky, S. Mathur, F. Kälberer and E. Grinspun, SGP 2007.
Laplacian Embedding
Spectral Metric
Distance computation on the surface
• Given a 3D shape represented as X ⊂ ℝ³ and its k-dimensional embedding Y = [u_2, …, u_{k+1}]^T.
• Each point x_i ∈ X is represented as y_i = [u_2(v_i), …, u_{k+1}(v_i)]^T.
• Geodesic(x_i, x_j) ≈ ‖y_i − y_j‖ = √( Σ_{m=2}^{k+1} ( u_m(v_i) − u_m(v_j) )² ).
[Figure: geodesic distance between x_i and x_j on the shape vs. the Euclidean distance ‖y_i − y_j‖ in the embedding]
Spectral Matching
Mateus et al., CVPR 2008
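A minimal sketch of such an embedding for a cycle graph: with the first two non-trivial Laplacian eigenvectors, Euclidean distances in the embedding grow monotonically with the graph (geodesic) distance:

```python
import numpy as np

n = 20
A = np.zeros((n, n))
for i in range(n):                       # cycle graph: i ~ i+1 (mod n)
    A[i, (i + 1) % n] = A[(i + 1) % n, i] = 1.0
L = np.diag(A.sum(axis=1)) - A

lam, U = np.linalg.eigh(L)
Y = U[:, 1:3]                            # embed each vertex as a point in R^2

# Distances from vertex 0 to vertices at graph distance 1, 2, ..., n/2.
d = [np.linalg.norm(Y[0] - Y[j]) for j in range(1, n // 2 + 1)]
print(all(a < b for a, b in zip(d, d[1:])))   # True: monotone in geodesic distance
```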
Questions?