Interactive Texturing on Objects in Images via a Sketching Interface

Kwanrattana Songsathaporn (The University of Tokyo), Henry Johan (Nanyang Technological University), Tomoyuki Nishita (The University of Tokyo)


Transcript of Interactive Texturing on Objects in Images via a Sketching Interface


Correcting Perspective Projection
- Specify depths at the furthest and nearest points (pink points)
- The pink points are estimated by the system

(Figures: furthest and nearest points (pink points); interface for specifying relative depth; result from the default perspective projection matrix; result from the corrected perspective projection matrix.)

As shown in the lower image, the default perspective projection gives a texture scale that is not consistent with the scene. To correct the texture scale produced by the perspective projection, users specify depths at the furthest and the nearest points from the image plane. The perspective projection is then recalculated, and the texture scale is fixed.

With our interface, users only have to specify depths at the furthest and the nearest points on the objects in the image; the two points themselves are estimated automatically to save users the complication. For more detail about the correction and the estimation of the furthest and nearest points, please refer to our paper.
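As a rough illustration of how the two specified depths can drive the correction, here is a minimal sketch assuming a simple pinhole camera and linear depth interpolation between the two pink points. The paper recalculates the full perspective projection matrix; the function names, the pinhole model, and the interpolation scheme here are all assumptions for illustration only.

```python
import numpy as np

def unproject(x_img, z, focal=1.0, center=(0.0, 0.0)):
    """Back-project an image point to 3D given a depth z, assuming a
    simple pinhole camera (an assumption; the paper instead recalculates
    the perspective projection matrix from the two user depths)."""
    return np.array([(x_img[0] - center[0]) * z / focal,
                     (x_img[1] - center[1]) * z / focal,
                     z])

def depth_at(x_img, x_near, x_far, z_near, z_far):
    """Interpolate a depth for x_img between the user-specified nearest
    and furthest points (a hypothetical linear scheme)."""
    x_img, x_near, x_far = map(np.asarray, (x_img, x_near, x_far))
    axis = x_far - x_near
    t = np.dot(x_img - x_near, axis) / np.dot(axis, axis)
    return z_near + np.clip(t, 0.0, 1.0) * (z_far - z_near)

# Points deeper in the scene unproject to larger 3D offsets, so the same
# on-image distance spans more texture there, keeping the scale consistent.
z = depth_at((0.5, 0.5), (0.0, 0.0), (1.0, 1.0), z_near=2.0, z_far=6.0)
print(unproject((0.5, 0.5), z))
```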

Overview
- Background & System Overview
- Related Work
- Proposed User Interface for Texturing
- Underlying Algorithms
  - Normal Vector Field Construction
  - Texture Coordinate Calculation
- Results
- Conclusions and Future Work

The presentation is about 20 minutes, in this order. First, I would like to explain the background of the research and give a general overview of our proposed system.

Research Background
- Paste textures on images while maintaining the underlying shapes of objects
- Applications in digital image/video manipulation
- Paste textures on blank areas
- Paste textures on textured areas

Texturing objects in images is the problem of pasting textures onto the objects in an image while maintaining the shapes of the objects, as shown in these images.

In the images we can see that, after the areas were pasted with textures, the creases and folds of the objects are still visible.

It is an interesting problem, since its applications benefit digital image and video manipulation.

Scope & Features (1/3)
- Interactive system for pasting textures
- Input image: drawings and photographs

(Figures: drawing; photograph.)

In this research we propose an interactive system for pasting textures onto images. The input image can be either a drawing or a photograph.

Scope & Features (2/3)
- Our system supports textures regardless of their patterns

(Texture image source: http://en.wikipedia.org/wiki/Texture_synthesis)

(Figures: (upper) textures with a pattern; (lower) a texture without a pattern.)

Texture images can be classified by their patterns; our system, however, supports texture images regardless of pattern.

Scope & Features (3/3)
Two features to make texturing easier:
- Systematic occlusion handling: create occlusion with one piece of texture
- Systematic perspective handling: consistency with the surroundings

(Figures: occlusion; perspective.)

Moreover, we propose two further features, systematic occlusion handling and systematic perspective handling, to make texturing in images easier. Occlusion handling lets users create an occlusion effect on an object with one piece of texture; some parts of the texture in the lower-left image are occluded where the curtain folds. Perspective handling lets users correct the texture scale so that it becomes consistent with the scene, as shown in the lower-right image. This saves users from manual texture scale editing.

System Framework

(Figures: input image; output image; texture image; user specifications. Video.)

- Texture mapping is based on a normal vector field => independent of patterns
- Object's normal vector field construction
- Texture coordinate calculation
- Specifications using the interface

The framework of our system can be explained with this diagram. First, the normal vector field of the object being textured is constructed; then the texture coordinate calculation is performed based on the obtained normal vector field, so that the texturing result depicts the shape of the object. Since our texture mapping is independent of patterns, our system supports any texture.

To construct the normal vector field of the object being textured, we let users make specifications with our user interface. The system interprets those specifications and constructs the normal vector field.

(click) This image shows the user specifications for normal vector field construction. Next, I will show a video clip of texturing using our system.

Overview: Related Work

Next, I will present some of the related work.

Texturing in Images (1/3)
- Liu et al., SIGGRAPH 2004
- Users align vertices on a grid mesh with texels to replace textures
- Does not support input objects without textures
- Requires a lot of user interaction

(Figures: (a) default grid mesh; (b) vertices aligned to texels.)

Liu et al. proposed a grid-mesh interface for replacing textures in images. Users replace textures by aligning vertices on the grid with the texels of the original textures. Thus the input must have a near-regular pattern on the objects, and aligning vertices with texels requires a lot of user interaction.

Texturing in Images (2/3)
- Fang and Hart, SIGGRAPH 2004
- Synthesize texture on objects in images
- Construct the normal vector field with shape-from-shading
- Constrained by the limitations of shape-from-shading
- Not interactive

Input image

(Figures: constructed normal field; output image.)

Fang and Hart proposed a system that pastes synthesized textures onto objects in images. The system uses a shape-from-shading algorithm to automatically construct the normal vector fields of the objects, so it requires fewer user interactions than Liu et al.'s system. However, the system is not interactive, and it is constrained by the limitations of shape-from-shading.

Texturing in Images (3/3)
- Winnemöller et al., EGSR 2009
- Use Diffusion Curves (DC) to design the normal vector field and the uv map
- Construct the normal vector field without shading
- Require a lot of user interaction

(Figures: designed normal vector field; pasted texture; added shading.)

Winnemöller et al. proposed a system that lets users design normal vector fields and uv maps with diffusion curves in order to paste textures. Although this system is not constrained by the limitations of shape-from-shading, the texturing process requires a lot of interaction.

Comparison of Methods
(Table comparing Liu et al. 2004, Fang and Hart 2004, Winnemöller et al. 2009, and our system on five aspects: object without texture, object with texture, input without shading, texture without pattern, and occlusion/perspective handling. Each cell is marked support, with limitations, or not support.)

This table shows a comparison of our system and the systems proposed by Liu et al., Fang and Hart, and Winnemöller et al.

Liu et al.'s interface depends on the pattern of the original textures, so it does not support input images whose objects are not already textured. Fang and Hart's system bases its texture mapping on a normal vector field, so it supports texturing objects without original textures; however, the shape-from-shading algorithm is sensitive to patterns in the input image. Winnemöller et al.'s system has no input-image limitations, but the supported textures are limited as a result of manual UV editing. As you can see, our system supports all of these aspects, and only our system provides occlusion and perspective handling.

Normal Construction Interface
- Wu et al., SIGGRAPH 2007
- Sketch-based interface: the shape palette
- Transfer normal vectors from shape palettes
- Easy-to-understand interface

(Figures: specifications to transfer normal vectors; 3D model reconstructed from the normal map.)

We chose to construct the normal vector field through a user interface instead of depending on shape-from-shading algorithms, since those algorithms require shading information, which is usually not available in drawings.

Wu et al. proposed a sketch-based interface called the shape palette. The shape palette's concept is to let users transfer normal vectors from the palettes by drawing matching strokes. We implement a sketch-based interface similar to the shape palette, since its concept is easy to understand.

Overview: Proposed User Interface for Texturing

In the next section, I will introduce our proposed user interface.

Two-Phase User Interface
- Object's normal vector field construction
- Texture coordinate calculation

As stated, our proposed system's framework consists of two phases, so the proposed user interface can be divided into two parts, one corresponding to each phase. The first phase of the interface focuses on the construction of the normal vector field.

Sparse Normal Vector Specification

(Figures: convex palette; concave palette; input image; normal vector field construction using our interface.)

This image shows an overview of normal vector field construction using our interface. The two spheres on the right side are shape palettes. For ease of specification, we provide a convex palette, which is the outer surface of a sphere, and a concave palette, which is the inner surface of a sphere.

We call the red loop (no. 2) on the input image the silhouette line of the object; this line marks the boundary of the texturing region.

Meaning of Palettes
- Imagine that the texturing region and the shape palettes are 3D objects
- The monitor screen is the image plane

(Figures: convex palette; imaginary 3D image of the convex palette.)

By imagining that the palettes and the object being textured are 3D objects and that the monitor screen is the image plane, when users draw the blue stroke on the palette, the normal vectors along the stroke can be depicted as the orange arrows. These normal vectors are then transferred to the pixels of the input image along a matching stroke, as sketched below.
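To make the palette-to-image transfer concrete, here is a minimal sketch of how a normal can be read off a spherical shape palette at a given screen point. The function name and the concave-palette sign convention are assumptions, not the paper's implementation.

```python
import numpy as np

def palette_normal(x, y, radius, convex=True):
    """Normal of a spherical shape palette at screen offset (x, y) from
    its center. Screen axes: x right, y up, z toward the viewer.

    Convex palette: outer surface of a sphere, normals bulge toward the
    viewer. Concave palette: inner surface; here we flip the in-plane
    components (an assumed convention) so the normals face inward.
    """
    r2 = x * x + y * y
    if r2 > radius * radius:
        raise ValueError("point is outside the palette")
    z = np.sqrt(radius * radius - r2)   # height of the sphere surface
    n = np.array([x, y, z]) / radius    # unit normal on a convex sphere
    if not convex:
        n[0], n[1] = -n[0], -n[1]       # inner surface of the sphere
    return n

# Example: normal halfway to the rim of a convex palette of radius 1.
print(palette_normal(0.5, 0.0, 1.0))    # ~ [0.5, 0.0, 0.866]
```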

This process results in normal vectors being known at some pixels of the input image. To obtain the complete normal vector field of the object, an energy function associated with the normal vector field is optimized.

Besides transferring normal vectors from the shape palettes, users can design the occlusion on the objects.

Split Stroke Specification
- Discontinuity in the normal vector field: occlusion and edges
- Split a single region into subregions
- The initial normal vector field of each subregion can be specified separately

(Figures: (a) specify the texturing region; (b), (c) subregions.)

The energy function associated with the normal vectors, which I will show in the next section, tries to eliminate discontinuity in the normal vector field. In certain cases, however, discontinuity is desirable, namely at occlusions and edges. We therefore let users draw special strokes, called splitting strokes, to mark the places where discontinuity appears. A splitting stroke splits a region into two subregions, as shown in the lower image. When the region is split, the initial normal vector field of each subregion can be specified separately.

Distance Between Subregions
- Approximate the occluded region by specifying a distance between subregions > 0

(Figures: (a) split region; (b) distance between subregions = 0; (c) distance between subregions > 0; (d) black strip showing the occluded part; (e) dragging the slide bar to increase the distance; imaginary top view of the curtain.)

To approximate the occluded parts of the texture, besides specifying the discontinuity with splitting strokes, the distances between the subregions must also be specified. Users drag the slide bar to increase the distances between the subregions, which causes parts of the texture to become occluded.

The second part of the interface focuses on controlling the texture coordinate calculation.

Overview: Underlying Algorithms

Next, I will show how the users' specifications are interpreted and how the results are produced based on them.

Normal Vector Field Construction

(Figure: direction of the x, y, and z components.)

To construct a normal vector field from the users' specifications, we represent the normal vectors as shown above, and optimize an energy function associated with the normal vector field (shown in the green box). The p component and the q component of the normal vectors are calculated separately; the green box shows only the calculation of the p component. To calculate the q component, we replace p in the equation with q.

- We aim to obtain, from the optimization, the value of p at every pixel. G represents the set of all pixels in the texturing region.
- p̄ represents the sparsely specified normal vectors transferred from the shape palettes. S represents the set of pixels whose normal vectors are specified.
- The first term of the equation keeps the obtained value as close as possible to the specified value.
- The second term enforces smoothness between adjacent pixels in the same subregion. Notice that smoothness is enforced within the same subregion only, so the discontinuity remains visible along the split stroke.
- The last term minimizes the curvature.
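The green box on the slide is an image and its exact formula is not in this transcript. The following is a hedged reconstruction assembled from the three terms described above, with quadratic penalties and the weights lambda_1, lambda_2 assumed, and with the common gradient-space reading of (p, q), where n is proportional to (-p, -q, 1), also an assumption.

```latex
% Assumed reconstruction of the energy for the p component; q is analogous.
% N = pairs of adjacent pixels in the same subregion,
% \Delta = discrete Laplacian (curvature term); the weights are assumptions.
E(p) = \sum_{i \in S} \left( p_i - \bar{p}_i \right)^2
     + \lambda_1 \sum_{(i,j) \in N} \left( p_i - p_j \right)^2
     + \lambda_2 \sum_{i \in G} \left( \Delta p_i \right)^2
```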

Overview: Texture Coordinate Calculation

Next, I will talk about the algorithm used to map textures.

What We Know
- The normal vector field of the objects (from normal vector field construction)
- The corrected perspective projection (from relative depth specification)
- Next: texture mapping

Up to this point, we have the normal vector field of the objects, constructed from the users' specifications, and we also have the corrected perspective projection matrix.

Now we can perform the texture mapping to produce the results.

Texture Coordinate Calculation
- Goal: find the texture coordinate (u,v) of every pixel in the texturing region
- Subregion by subregion, in a flood-fill manner from the pinpoint

(Figure: p: (u,v) known; r: (u,v) unknown.)

The texture coordinate, or (u,v), calculation is performed subregion by subregion, in a flood-fill manner starting from the pinpoint pixel of the texturing region.

The lower image shows how texture coordinates are calculated between two adjacent pixels. p and r are adjacent pixels on the input image, separated by a distance of one pixel. Assume that the texture coordinate at pixel p is already known. We calculate the texture coordinate at pixel r as follows. First, we invert the projection of pixels p and r, which gives us their 3D coordinates p′ and r′. The inverse projection matrix used in this step has already been corrected as a result of the relative depth specification.

We can then construct a vector from p′ to r′, shown as the yellow vector in the image. To obtain a coordinate in the 2D plane that preserves both the scale and the direction of the yellow vector, we rotate the yellow vector onto the tangent plane at p′, which is constructed from the normal vector of pixel p.

The result of the rotation is the black dashed vector. Finally, the new texture coordinate at pixel r is calculated by adding the black dashed vector to the known coordinate at pixel p; the sketch below illustrates this per-pixel step.
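Here is a minimal sketch of the per-pixel propagation just described. The `unproject` parameter stands for the corrected inverse projection (any function mapping an image position to a 3D point), and the choice of 2D basis on the tangent plane is an assumption; the paper's exact rotation and basis may differ.

```python
import numpy as np

def propagate_uv(uv_p, x_p, x_r, n_p, unproject):
    """Propagate a known texture coordinate from pixel p to a neighbor r.

    uv_p      : (2,) known texture coordinate at p
    x_p, x_r  : image positions of p and r
    n_p       : (3,) unit normal at p
    unproject : inverse of the corrected perspective projection,
                mapping an image position to a 3D point
    """
    p3, r3 = unproject(x_p), unproject(x_r)
    d = r3 - p3                               # the "yellow vector" p' -> r'
    # Rotate d onto the tangent plane at p' (orthogonal to n_p) while
    # preserving its length: drop the normal component, then rescale.
    d_tan = d - np.dot(d, n_p) * n_p
    if np.linalg.norm(d_tan) > 1e-12:
        d_tan *= np.linalg.norm(d) / np.linalg.norm(d_tan)
    # Express the rotated vector in an assumed 2D basis of the tangent plane.
    u_axis = np.cross(np.array([0.0, 0.0, 1.0]), n_p)
    if np.linalg.norm(u_axis) < 1e-6:         # normal faces the viewer
        u_axis = np.array([1.0, 0.0, 0.0])
    else:
        u_axis /= np.linalg.norm(u_axis)
    v_axis = np.cross(n_p, u_axis)
    return uv_p + np.array([np.dot(d_tan, u_axis), np.dot(d_tan, v_axis)])
```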

As the Calculation Continues
(Figures: initial state; flood fill. p: (u,v) known; r: (u,v) unknown. Arithmetic weighted mean.)

As stated, the calculation starts at the pinpoint of the texturing region and progresses in a flood-fill manner until every pixel in the region is covered. As the calculation continues, it is possible that more than one adjacent pixel of r already has a calculated texture coordinate. In that case, we perform the texture coordinate calculation between pixel r and each such adjacent pixel separately, and then take the arithmetic weighted mean of the results, as sketched below.
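A minimal sketch of the combination step follows. The specific weighting scheme is an assumption: since error accumulates with distance from the pinpoint, neighbors closer to it are trusted more; the appendix slide's x_j / x_0 notation suggests the weights depend on these positions, but the paper's exact formula is not in this transcript.

```python
import numpy as np

def combine_candidates(uv_candidates, x_neighbors, x_pinpoint):
    """Arithmetic weighted mean of per-neighbor uv estimates for a pixel.

    uv_candidates : list of (2,) uv estimates, one per already-known neighbor
    x_neighbors   : image positions x_j of those neighbors
    x_pinpoint    : image position x_0 of the pinpoint

    Weights are an assumed inverse-distance-to-pinpoint scheme.
    """
    x0 = np.asarray(x_pinpoint, dtype=float)
    w = np.array([1.0 / (1.0 + np.linalg.norm(np.asarray(xj) - x0))
                  for xj in x_neighbors])
    w /= w.sum()
    return sum(wi * np.asarray(uv) for wi, uv in zip(w, uv_candidates))
```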

Overview: Results

Next, I would like to show some of our results.

Results (1/2)
- Interaction time (minutes): Winnemöller et al.: 60; our system: 30

(Figures: (a) input; (b) retextured image; (c) texture.)

We compare the texturing time using our system with the texturing time using Winnemöller et al.'s system as reported in their paper. In this image, the user pasted the texture image in (c) onto the sari in (a). Image (b) is the result of retexturing with our system.

Winnemöller et al.'s interaction time for this image is 60 minutes, while the user retextured the same image in 30 minutes using our system.

Results (2/2)
- Interaction time (minutes): Winnemöller et al.: 30; our system: 16

(Figures: (a) input; (b), (c) retextured images.)

In this image, the texture of the dress in the input image (a) is replaced using our system, with the results shown in (b) and (c). Compared with Winnemöller et al.'s interaction time of 30 minutes, texturing with our system took 16 minutes.

Next, I will show a movie clip of the making of this result.

Limitation
- Distortion accumulates in areas distant from the pinpoint (red point)

(Figure: distortion is visible in the green-boxed area.)

Since our texture coordinate calculation is performed in a flood-fill manner, the error accumulates as the calculation moves away from the pinpoint. The error is therefore most clearly visible in areas distant from the pinpoint, as shown in the image.

Overview: Conclusions and Future Work

Finally, I will conclude today's presentation and discuss future work.

Conclusions
- Interactive system for texturing objects in images
- Independent of texture patterns
- Systematically handles occlusion
- Systematically handles perspective
- Reduces user interaction
- Produces results in reasonably shorter time

In conclusion, we proposed an interactive system for texturing objects in images. Our system supports input textures regardless of their pattern. Systematic occlusion and perspective handling are provided, which reduces the users' interaction, effort, and time.

Compared with the previous work, our system produces results in a reasonably shorter time.

Future Work
- An intuitive user interface that handles complex occlusion systematically

- A user study: How precisely can users perceive normal vectors on objects? How precisely can users specify normal vectors using our interface?

(Figure: photograph showing complex occlusion.)

Our occlusion handling lets users create occlusion with one piece of texture, and it works well for simple occlusion. However, for an object such as the one shown in this image, it is unavoidable that the object must be textured with several pieces of texture. For future work, we would like to develop an intuitive user interface that can handle complex occlusion systematically. We would also like to conduct a user study on how precisely users can perceive normal vector directions from a single image, and how precisely users can specify normal vectors using our interface.

Thank You

Q & A

Appendix: As the Calculation Continues
(Figures: positions of the adjacent pixels; position of the pinpoint; p: (u,v) known; r: (u,v) unknown.)

In this notation, x_j denotes the image position of the adjacent pixel j, and x_0 denotes the image position of the pinpoint.
