
LTouchIt: LEGO Modeling on Multi-Touch Surfaces

Daniel Mendes
IST/Technical University of Lisbon

[email protected]

October 2011

Abstract

Presently, multi-touch interactive surfaces have widespread adoption. Taking advantage of such devices, we present an interactive LEGO application, developed according to an adaptation of building-block metaphors and direct multi-touch manipulation. Although touch-based interaction has evolved in the past years, manipulating 3D objects still requires know-how of CAD and 3D modeling software. In this work, we explore how 3D object manipulation can be simplified and made available to most users. Our solution (LTouchIt) allows users to create 3D models on a tabletop surface. To validate our approach, we compared LTouchIt with two LEGO applications, conducting a user study with 20 participants. The results suggest that our touch-based application can compete with existing mouse-based applications.

1 Introduction

Multi-touch surfaces introduced new interaction paradigms, different from those provided by traditional interfaces that rely on Windows, Icons, Menus and Pointing devices, denoted WIMP interfaces [15]. Although touch-based interaction has evolved in the past years, manipulating 3D objects still requires know-how of CAD and 3D modeling software, making such tools unsuitable for entertainment. In this work, we explore how 3D object manipulation can be simplified and made available to most users. We therefore chose the LEGO scenario, which is familiar to users of all ages. Nevertheless, we expect our findings to be of value to general applications that require direct manipulation of 3D content.

For most people, LEGO stands for more than a toy manufacturer. Many recognize it as the playful plastic blocks (bricks) that echo in our childhood imagination. We envision that multi-touch interaction can surpass existing mouse-based LEGO applications for educational and entertainment purposes, namely in exhibitions, public displays and museums. We present an interactive LEGO application that supports bimanual multi-touch input to create 3D models on a tabletop surface without expertise in 3D or CAD software.

Throughout the following sections we present an overview of existing virtual LEGO applications and of research on 3D object manipulation for multi-touch surfaces. We follow with a detailed description of our solution, one that we believe can address 3D manipulation and still remain fun for building virtual LEGO models. Then, an evaluation methodology is described in which our prototype is compared against two existing applications, followed by an analysis and discussion of the user evaluation sessions. Lastly, we draw conclusions from our research and pinpoint possible ways to improve upon this work.

2 Related Work

As multi-touch surfaces become more available in exhibition and learning contexts, we propose an interactive LEGO application that takes 3D manipulation into consideration and remains coherent with the actions available in other virtual LEGO solutions. Therefore, we review existing LEGO applications from a critical perspective, drawing conclusions for our proposal.

To build LEGO models on a multi-touch tabletop, the challenge of manipulating virtual 3D objects must be addressed. This challenge has been the subject of research in Human-Computer Interaction. In this section, we also present an overview of the field, focusing on metaphors that provide bimanual manipulation supported by multi-touch surfaces.

2.1 LEGO Applications

Following the technological advances we have been witnessing, some toys have been ported to the digital world. LEGO construction bricks are no exception and, currently, several applications allow users to create virtual LEGO models. However, the majority of these applications are mouse-based, something we believe diminishes the fun factor in such contexts. We present a survey of digital LEGO applications, focusing on both popular applications and innovative examples.

LEGO Digital Designer (LDD)1 is a proprietary application from the LEGO Company. Models are created in a 3D environment featuring a visible auxiliary grid that resembles a traditional LEGO baseplate, as depicted in Figure 1. LEGO bricks are displayed, through their previews, in a browsable list. Interaction with on-screen widgets allows the user to orbit the camera around a point, without tilting (no roll); zoom also relies on widgets, while panning requires mouse movement combined with a keyboard shortcut. Manipulation of bricks relies on an efficient system of connecting parts, based on the grid concept. Therefore, the translation of a brick is performed exclusively in the grid plane, to which it adapts. If the user translates a brick onto an existing one, the new brick is placed above it. Rotation is restricted to two axes, which are perspective-dependent; thus, it requires reasoning about which camera position is best for the desired rotation effect.

Figure 1: LEGO Digital Designer.

1 ldd.lego.com, last accessed on October 13, 2011.

Mike’s LEGO CAD (MLCad)2 is a Computer-Aided Design (CAD) system for building virtual LEGO models. It utilizes four viewports, each providing a different view of the 3D model, as depicted in Figure 2. Both perspective (non-editable, only for visualization) and orthogonal views (for editing purposes) are supported. As expected, being CAD-based, it does not suit all users, in particular those who are not acquainted with the paradigms of typical CAD software. Thus, as our tests will show, it steepens the learning curve for building virtual LEGO models. It integrates the LDraw3 open-source parts library, which is widely used by the community of LEGO aficionados. Bricks are displayed in two browsable panels, one listing brick names (textual) and another showing their previews (graphical). Unlike most LEGO applications, the MLCad environment does not provide an auxiliary grid, and there is no restriction on the position of the bricks. Regarding brick manipulation, translation occurs in a plane parallel to the selected orthogonal view, while rotation can be executed around any of the three axes (roll, pitch and yaw).

Figure 2: Mike's LEGO CAD.

LeoCAD4, much like the previous system, also utilizes the CAD paradigm and the LDraw library, but allows manipulation in both perspective and orthogonal views. Unlike the remaining applications, searching for bricks is accomplished through a text-based list, since a graphical preview only becomes available once a brick has been selected. From an interaction perspective, LeoCAD requires mode switching via interface buttons, e.g., the user has to swap explicitly from rotation to translation. By default, brick translation is performed in a horizontal plane; hence, if the user intends to move the object vertically, it is required to drag the corresponding axis (each object displays a 3D axis widget) instead of simply dragging the object. To rotate bricks, LeoCAD follows the approach of Shoemake [14], displaying rotation handles. Once grabbed (via mouse), each of these three handles restricts the rotation effect to a single plane. Furthermore, the camera can be rotated around its own axis, tilted (roll) or orbited around a point.

2 mlcad.lm-software.com, last accessed on October 13, 2011.
3 www.ldraw.org, last accessed on October 13, 2011.
4 www.leocad.org, last accessed on October 13, 2011.

Moving away from the WIMP metaphor, LSketchIt [13] is a calligraphic tool that allows LEGO models to be built through sketching. The system is built upon LeoCAD and shares many of its features, although the retrieval and selection of bricks is achieved by drawing a sketched version of the desired brick. Given the brick outline (sketch), the system presents a list of suggestions and allows the user to make modifications to the brick, refreshing the suggestions according to each modification. The authors suggest that brick retrieval time can be reduced through sketching when compared to LEGO CAD applications.

Recently, boosted by the spread of handheld multi-touch devices (such as the iPad/iPhone), the Blocks!!5 application was commercially released. In this application, camera manipulation is performed with one touch for pan and two for rotation and zoom (pinch zoom). New bricks can be added by selecting them from a list and then touching where the brick should be placed. Dragging an object moves it on the grid plane. When objects collide, the selected brick is stacked on top of the other. Furthermore, bricks can be rotated using two fingers: the first fixes the object and the second supplies the rotation direction. Multiple taps on a brick cause it to rotate by 90-degree intervals per tap and, when a full rotation is achieved, the brick is removed from the scene. Since this application is targeted at small multi-touch surfaces, it does not account for whole-hand and bimanual interaction. A comparison against the remaining applications shows that Blocks!! is only aimed at simple LEGO models, since the brick selection and overall features are quite limited.

5 itunes.apple.com/us/app/blocks/id390088835, last accessed on October 13, 2011.

A critical perspective on the aforementioned applications allows us to conclude that, currently, there is no virtual LEGO application suited for large multi-touch tabletops, which we believe would be more adequate for exhibition and educational purposes, while providing natural and pleasing interaction.

2.2 Tabletop 3D Object Manipulation

Research on object manipulation for tabletops started by studying several techniques for the rotation and translation of 2D objects using multi-touch gestures [5]. One of these techniques, two-point rotation and translation, became the de facto standard and is now popular in several multi-touch applications, even commercial ones. This technique uses two points of contact: the first translates the object and the second rotates the object around the first touch. When combined with the variation of finger distance, it can also be used to scale the object; this is known as rotate-scale-translate (RST), or "pinch" when only the scale or zoom effect is concerned [17].
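To make the two-point technique concrete, the sketch below derives the 2D translation, rotation and scale from the motion of two contact points. The incremental formulation and the function names are ours, for illustration only; they are not taken from [5] or [17].

```python
import math

def rst_from_touches(p1_old, p2_old, p1_new, p2_new):
    """Incremental 2D rotate-scale-translate from two moving contact points.

    The first touch carries the translation; the vector between the touches
    gives the rotation angle and the scale factor (the "pinch").
    """
    # Translation: displacement of the first (anchor) touch.
    tx = p1_new[0] - p1_old[0]
    ty = p1_new[1] - p1_old[1]

    # Vector from the first touch to the second, before and after the move.
    vx_old, vy_old = p2_old[0] - p1_old[0], p2_old[1] - p1_old[1]
    vx_new, vy_new = p2_new[0] - p1_new[0], p2_new[1] - p1_new[1]

    # Rotation: change in the vector's orientation.
    angle = math.atan2(vy_new, vx_new) - math.atan2(vy_old, vx_old)

    # Scale: change in the distance between the two touches.
    scale = math.hypot(vx_new, vy_new) / math.hypot(vx_old, vy_old)

    return (tx, ty), angle, scale

# Example: the second finger moves away from a fixed first finger -> pure scale.
print(rst_from_touches((0, 0), (1, 0), (0, 0), (2, 0)))   # ((0, 0), 0.0, 2.0)
```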

Figure 3: Sticky Fingers and the Opposable Thumb. From [4].

As far as three-dimensional manipulation is concerned, various approaches have been proposed. Hancock et al. [3] suggested a set of guidelines for developing multi-touch interfaces, establishing a visual and physical link with the object and 3D visual feedback. Furthermore, these authors present a study on the manipulation of 3D objects using one, two and three simultaneous touches. Their results show that with one touch it is possible to extend the Rotate 'N Translate (RNT) algorithm [7] to the third dimension. This suggests that geometric transformations (rotate, scale or translate) should be relative to the touched point and not to the center of the object. Also, in the single-touch approach, the rotation can be performed around any of the three axes, while translation occurs in a plane parallel to the visualization plane. The two-touch approach uses the first contact point to apply the RNT algorithm exclusively in the two dimensions of the viewing plane, while the second touch allows rotation around the two remaining axes and translation in the third dimension (the one orthogonal to the viewing plane). Lastly, the three-touch approach (which the authors denote Sticky Fingers [4], illustrated in Figure 3) showed optimistic results, faring better than the other two. In this approach, by applying the two-point rotation and translation technique for two-dimensional objects, it is possible to translate the object in two dimensions and rotate it around one axis. The scale is replaced by translation in depth relative to the camera. For rotation, two touches on the object define a rotation axis, while a third touch, denominated the Opposable Thumb, allows rotation around that axis, providing six DOFs (degrees of freedom).
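The Opposable Thumb rotation can be expressed with Rodrigues' rotation formula: the two touches on the object define the axis and the third touch supplies the angle. The sketch below is our own illustrative formulation, not code from [4].

```python
import numpy as np

def rotate_about_axis(point, axis_p0, axis_p1, angle):
    """Rotate `point` by `angle` (radians) about the axis through axis_p0 and
    axis_p1, e.g. an axis defined by two touches on the object."""
    k = np.asarray(axis_p1, float) - np.asarray(axis_p0, float)
    k /= np.linalg.norm(k)                       # unit rotation axis
    v = np.asarray(point, float) - np.asarray(axis_p0, float)
    rotated = (v * np.cos(angle)
               + np.cross(k, v) * np.sin(angle)
               + k * np.dot(k, v) * (1.0 - np.cos(angle)))
    return rotated + np.asarray(axis_p0, float)

# Example: a quarter turn about the vertical axis through the origin.
print(rotate_about_axis((1, 0, 0), (0, 0, 0), (0, 0, 1), np.pi / 2))
```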

For handling 6 DOFs simultaneously, Reisman et al. [12] introduced a method to extend RST into 3D. Using only direct touches on the object, a constraint solver calculates the new position and orientation of the object, maintaining the correspondence between the positions of the fingers on the 2D screen and the screen-space target positions, i.e., the 3D points where the user touched the object.

Wilson [16] followed a different approach, in which there is no need to define a specific interaction through gesture recognition. His solution consists of virtual proxies of the user's contact zone with the surface. The interaction between the proxies and the virtual objects follows a physical model, which allows the manipulation of objects in the scene; e.g., translation can be achieved by pushing the object from one side. However, this approach does not address depth manipulation, since the objects are placed in a horizontal plane.

Figure 4: Illustration of the Z-Technique. From [8].

Martinet et al. [8] propose two techniques to translate 3D objects. The first extends the viewport concept found in many CAD applications (four viewports, each displaying a different view of the model). Touching and dragging the object within one of the viewports translates the object in a plane parallel to that view. Manipulating the object, with a second touch, in a different viewport modifies its depth relative to the first touch. The second method, denoted the Z-Technique (Figure 4), employs only one view of the scene. In this technique, the first touch moves the object in the plane parallel to the view, while the backward-forward motion of a second touch controls the depth relative to the camera position. The authors' preliminary evaluation suggests that users prefer the Z-Technique.
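The following sketch shows one possible mapping in the spirit of the Z-Technique; it is our own illustrative formulation rather than the one in [8], and the depth gain is an arbitrary placeholder. The first touch drags the object in the view plane, while the vertical motion of a second, indirect touch pushes it along the view direction.

```python
import numpy as np

def z_technique_update(position, view_right, view_up, view_dir,
                       primary_delta, secondary_dy, depth_gain=0.01):
    """Move a 3D `position` from screen-space touch deltas.

    primary_delta: (dx, dy) of the touch on the object, moving it in the
                   plane parallel to the view.
    secondary_dy:  vertical motion of a second touch, controlling depth
                   relative to the camera (backward/forward).
    """
    dx, dy = primary_delta
    offset = dx * view_right + dy * view_up + depth_gain * secondary_dy * view_dir
    return position + offset

# Example with an axis-aligned camera looking down -z.
pos = np.array([0.0, 0.0, 0.0])
right, up, forward = np.eye(3)
print(z_technique_update(pos, right, up, -forward, (5.0, 0.0), 20.0))
```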

Improving upon the Z-Technique, Martinet et al. [9] added the constraint solver described in [12] to perform rotations, separating the control of translations from rotations. With one direct touch on the object the user can move it on the camera plane, and through a simultaneous indirect touch its depth can be manipulated. Also, with two or more direct touches the object can be rotated. A comparison of this approach to Sticky Fingers and Screen-Space [12] showed that separating translation and rotation can lead to better performance.

Cohe et al. [1] propose a widget for indirect manipulation of the object in 9 DOFs (three for translation, three for rotation and three for scaling), each handled separately. It consists of a wireframe box with which the user interacts, instead of the actual object itself. The object is translated by dragging the edge of the box corresponding to the desired axis. Dragging two opposite edges scales the object. Rotation is defined by the direction of user motion, once the user starts dragging a face of the box.

The aforementioned work covers interaction with 3D objects on multi-touch surfaces, but most of it targets scenarios where the camera does not move, or at least does not rotate. Moreover, scenes with different cameras, such as an isometric perspective, will require additional cognitive effort from the user in order to understand the plane in which the objects are moving. It might also be difficult for a user to comprehend which gestures are needed to translate an object to a desired position, e.g., to move the object in a horizontal plane. Besides, as stated in [9], it is difficult to interact with small objects using multi-finger approaches, since there is insufficient space for multiple direct touches on the same object.

Our work addresses these issues, since they are common challenges in 3D building-block applications such as the LEGO scenario. We propose an application, with free camera manipulation, that provides easy navigation and understandable 3D object interaction (6 DOFs).

3 LTouchIt

We developed an application (LTouchIt) to build virtual LEGO models on multi-touch sensitive surfaces, based on the analysis of existing virtual LEGO applications and the state of the art regarding object manipulation on tabletops, as previously described.

To start the development cycle, we conducted an evaluation with 21 users in order to identify the advantages and drawbacks of three of the most commonly used LEGO applications: LDD (LEGO Digital Designer), MLCad and LeoCAD (which are covered in our related work). The results of this evaluation, as described in [10], served as guidelines throughout the development of our solution, one that we hope can overcome the problems identified in existing solutions and also suit entertainment and educational purposes.

3.1 Architecture Overview

Our application was developed according to the architecture illustrated in Figure 5. The blue modules are existing solutions we adopted, while the green modules were developed for our application. The first and most relevant is the Interaction Controller. This module analyses the movements of the fingers on the surface, which are tracked using CCV tbeta6, in order to identify the gestures made and the corresponding actions. These actions are then transmitted to the other developed module, the LEGO Modeller. This is the core of the application, where all of its logic is implemented. It also contains information about the LEGO bricks, the model being built and the camera.

Figure 5: LTouchIt architecture.

All the data related to the scene, including the representation of the bricks, their position, orientation and details of the visualization, is stored using OpenSG7. This open-source scene graph system undertakes the conversion to OpenGL primitives and the rendering process. The information regarding the geometry of the bricks is gathered from the LDraw parts library. Using the LDraw standard, LTouchIt can open construction files saved from other LDraw-compatible applications, and vice versa.
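Since brick geometry and models are stored in the LDraw format, a minimal reader for LDraw part-reference lines (line type 1: a colour code, a 3D position, a 3x3 orientation matrix and a sub-file name) gives an idea of the data involved. This is a simplified, hypothetical sketch, not LTouchIt's actual loader, and it ignores the other LDraw line types.

```python
from dataclasses import dataclass

@dataclass
class BrickRef:
    colour: int
    position: tuple          # x, y, z in LDraw units
    orientation: tuple       # 3x3 rotation matrix, row-major
    part_file: str           # e.g. "3001.dat"

def parse_ldraw_line(line):
    """Parse an LDraw type-1 line: '1 colour x y z a b c d e f g h i part.dat'."""
    tokens = line.split()
    if not tokens or tokens[0] != "1":
        return None                      # comments and other line types ignored here
    colour = int(tokens[1])
    x, y, z = map(float, tokens[2:5])
    matrix = tuple(map(float, tokens[5:14]))
    return BrickRef(colour, (x, y, z), matrix, tokens[14])

# Example: a brick reference at the origin with an identity orientation.
print(parse_ldraw_line("1 4 0 0 0 1 0 0 0 1 0 0 0 1 3001.dat"))
```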

3.2 Interface

Figure 6: The LTouchIt interface.

The interface of LTouchIt is depicted in Figure 6. The construction takes place in the central area, on a visible grid. The grid adapts if necessary, expanding and contracting as bricks are spread throughout the construction area. It resembles a traditional LEGO baseplate, helping users fit the bricks, which move in snapped intervals of one unit.

The right-side vertical panel holds the available bricks. The user can search through the list by moving it up or down. Within the panel, bricks are organized into groups, making them easier to retrieve. Also, a scroll bar indicator allows quick access to brick sub-groups by touching their visual identification (an icon depicting the type of bricks found in the group).

6 ccv.nuigroup.com, last accessed on October 13, 2011.
7 www.opensg.org, last accessed on October 13, 2011.

To use a brick, the user can pick it from the list and drag it to the desired position. Conversely, to remove a brick from the construction site, the user can drag it back to the panel. Also, holding a finger on a brick in the panel will display more information, such as the brick's name and dimensions.

At the bottom there are two toolbars, which can be expanded and collapsed. The left toolbar holds the usual functionalities, such as creating a new model, saving the current model and quitting the application. Furthermore, it also allows the user to temporarily hide bricks or to show previously hidden bricks. The right toolbar is a color palette that can be used to change the color of a brick or of the whole brick list.

A final analysis of our interface shows that tabletop design guidelines have been taken into consideration: the interactive area is maximized (77% of the area is dedicated to building the models); the dominant hand is accounted for (as it provides more ergonomic comfort throughout the interaction); and graphical widgets have an appropriate size (large enough to be touched while remaining partially visible, thus minimizing occlusion).

3.3 Interactions

To manipulate bricks, we aimed at a natural and familiar scenario for LEGO users, moving away from the traditional selection concept, i.e., dragging objects with a single touch. Instead, we used a "pick" metaphor to grab an object and then, without releasing, move it, as depicted in Figure 7. This gesture resembles picking up a real LEGO brick and placing it at the desired location. As our test subjects noted in their comments, this gesture was found to be very familiar and understandable.

Figure 7: (a) "Pick" gesture; (b) Arrows identifying a vertical translation plane; (c) Changing to a horizontal translation plane.

In order to assess the best approach for 3D object manipulation on multi-touch tabletops with unconstrained viewpoints, we carried out a user evaluation with 20 users comparing several approaches for translation and rotation. In the following sections we briefly describe the comparison of these techniques and draw conclusions about the selected approach. Complementing the object manipulation, we present our camera manipulation and cloning metaphor.

3.3.1 Object Translation

We developed three translation techniques: Orthogonal uses a translation plane whose normal is closest to the view vector and orthogonal to one of the scene axes; Horizontal-Z moves the object in a horizontal plane and, by scrolling a second touch, changes the object's depth relative to the camera (similar to the Z-Technique); and Plane-Switch uses a horizontal translation plane which, by tapping a second finger, changes to a vertical plane. For all techniques, dragging the object moves it in the current translation plane; thus, the user manipulates no more than two DOFs with one finger, which maintains a strict relation between the two-dimensional input and the two-dimensional translation.
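For the Orthogonal technique, the translation plane is the scene-axis-aligned plane whose normal is closest to the current view vector. A small sketch of one way to pick that plane follows; this is our illustrative formulation, not the application's code.

```python
import numpy as np

def orthogonal_plane_normal(view_dir):
    """Return the scene axis (as a unit normal) most aligned with the view direction.

    The returned axis defines the translation plane used by the Orthogonal
    technique: dragging the object moves it within this plane.
    """
    axes = np.eye(3)                               # scene axes x, y, z
    dots = np.abs(axes @ np.asarray(view_dir))     # alignment with the view vector
    best = int(np.argmax(dots))
    return axes[best]

# Example: a camera looking mostly downwards selects the horizontal (y-normal) plane.
print(orthogonal_plane_normal((0.1, -0.9, 0.2)))   # -> [0. 1. 0.]
```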

We compared these approaches amongst each other and with the Z-Technique. Test results showed that users experienced difficulties when translating objects in a plane parallel to the view, suggesting that a translation plane orthogonal to one of the scene axes is desirable. Also, we found that manipulating object depth with the scrolling touch was sometimes misunderstood as a scaling gesture. Result analysis denoted Orthogonal and Plane-Switch as the more efficient approaches, although without statistically significant differences between them, which suggests that a combination of the two (since they are combinable) might yield the desired approach.

Our solution combines the aforementioned approaches into one (Grab'N Translate), as depicted in Figure 7. After picking up the object (Figure 7.a), the user translates it in the plane defined by the camera (Figure 7.b), making the most of the current view. If the user intends to change the translation plane, no camera movements are required; instead, a tap with a second finger alternates between a horizontal and a vertical plane (Figure 7.c). Visual feedback is provided by four arrows and the shadow of the object. The arrows depict the possible directions in which the object can move, indicating the current translation plane. When the object intersects the construction grid, the arrows become red.
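A sketch of the plane handling in Grab'N Translate, in our own simplified formulation (not LTouchIt's code): dragging moves the brick within the current plane, and a tap with a second finger toggles between the horizontal plane and a vertical plane facing the camera. Flattening the camera vectors onto the floor, as done below, is our assumption about one reasonable way to realise this.

```python
import numpy as np

class GrabNTranslate:
    """Simplified state for the plane-switch translation (illustrative only)."""

    def __init__(self):
        self.horizontal = True          # start on the horizontal plane

    def toggle_plane(self):
        """Tap of a second finger: switch between horizontal and vertical plane."""
        self.horizontal = not self.horizontal

    def drag(self, position, drag_2d, view_right, view_up):
        """Move `position` by a 2D drag, constrained to the current plane."""
        dx, dy = drag_2d
        flat_right = view_right * np.array([1.0, 0.0, 1.0])   # drop the up component
        if self.horizontal:
            flat_forward = view_up * np.array([1.0, 0.0, 1.0])
            return position + dx * flat_right + dy * flat_forward
        # Vertical plane facing the camera: screen y maps to the world up axis.
        return position + dx * flat_right + dy * np.array([0.0, 1.0, 0.0])

gnt = GrabNTranslate()
pos = np.array([0.0, 0.0, 0.0])
pos = gnt.drag(pos, (1.0, 1.0), np.array([1.0, 0.0, 0.0]), np.array([0.0, 0.7, 0.7]))
gnt.toggle_plane()                     # second-finger tap: now a vertical plane
print(pos)
```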

3.3.2 Object Rotation

Concerning the rotation of objects, we developed three techniques: Camera-Defined-Axis uses RST applied to the current translation plane, i.e., while moving the object, a second touch rotates the object around the first; User-Defined-Axis is similar to the Opposable Thumb, but the rotation axis is one of the scene axes; and Rotation-Handles uses virtual handles (similar to the object handles in [2]), where the first touch selects the rotation axis and the second touch rotates the object. These techniques only allow rotation to be controlled over one DOF at a time, something we found desirable in our evaluation of LEGO applications.

Figure 8: (a) Showing rotation handles; (b) Rotating a brick.

User evaluation showed that Rotation-Handles outperforms the remaining approaches. Users seem to be more proficient with virtual handles, since they provide more comprehensible feedback about the rotation axis while reducing the time needed to change it: users do not need to change views, as observed with Camera-Defined-Axis, or define the axis manually, as required by User-Defined-Axis. Rotation-Handles is the technique used in our solution, complemented with a snap mechanism that automatically fits the brick to the nearest 90-degree angle when the rotation handle is released. In order to make the handles visible, the user taps somewhere on the surface while holding the object with the other finger, as illustrated in Figure 8.
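The 90-degree snapping applied when a rotation handle is released amounts to rounding the accumulated angle; the helper below is our hypothetical formulation, not LTouchIt's code.

```python
def snap_to_right_angle(angle_degrees):
    """Snap a rotation to the nearest multiple of 90 degrees, as done when a
    rotation handle is released."""
    return 90.0 * round(angle_degrees / 90.0)

assert snap_to_right_angle(78.0) == 90.0
assert snap_to_right_angle(-130.0) == -90.0
print(snap_to_right_angle(44.0))   # 0.0
```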

3.3.3 Camera Manipulation

To manipulate the camera, our solution accounts for orbit, zoom and pan operations. One touch rotates the camera around the model; two touches zoom in and out; and four or more touches in close proximity move the camera. It is possible to quickly re-center the camera by tapping the desired position while panning. Furthermore, through observation we found that some users prefer to rotate the camera with two touches, so this option was also included.
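The camera control above boils down to dispatching on the number of simultaneous touches. A schematic dispatcher could look like the sketch below; this is our own simplification, with an arbitrary proximity threshold, and it omits the optional two-touch rotation and the checks for whether a brick is currently being held.

```python
def camera_action(touches, proximity_threshold=80.0):
    """Pick a camera operation from the current set of touch points (x, y)."""
    n = len(touches)
    if n == 1:
        return "orbit"            # one touch rotates the camera around the model
    if n == 2:
        return "zoom"             # two touches zoom in and out (pinch)
    if n >= 4 and _close_together(touches, proximity_threshold):
        return "pan"              # four or more touches in close proximity pan
    return None

def _close_together(touches, threshold):
    xs = [t[0] for t in touches]
    ys = [t[1] for t in touches]
    return (max(xs) - min(xs)) < threshold and (max(ys) - min(ys)) < threshold

print(camera_action([(10, 10)]))                               # orbit
print(camera_action([(10, 10), (12, 14), (15, 9), (11, 20)]))  # pan
```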

To provide a fluid interaction, we allow the concurrent bimanual combination of camera rotation and object translation. Thus, the user can change the viewing angle with one hand while moving the picked object with the other. Moreover, the translation plane will change according to the new perspective.

3.3.4 Brick Cloning

Figure 9: Gesture for cloning a brick.

To duplicate (or clone) a brick, preserving its color and orientation, we developed a cloning metaphor, "touch and pick", which is illustrated in Figure 9. The gesture consists of pointing to the object that we want to duplicate with one finger of the non-dominant hand and, then, picking the same object with a pick gesture using the dominant hand.

Once the "pick" gesture is completed, the new brick is created and placed underneath the dominant hand, allowing the user to move it to the desired position. Since we found that it can be difficult to touch and pick small-sized objects, we allow the pick to occur outside the brick, as depicted in Figure 9. Furthermore, this enables users to create the cloned object directly at the desired position.
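Conceptually, the clone operation just instantiates a new brick with the same part, colour and orientation as the touched one, placed under the picking hand. A hypothetical sketch of that step (all names are ours):

```python
import copy
from types import SimpleNamespace

def clone_brick(source_brick, pick_position):
    """Copy a brick, preserving its colour and orientation, and place the copy
    at the position where the pick gesture occurred (under the dominant hand)."""
    new_brick = copy.deepcopy(source_brick)
    new_brick.position = pick_position
    return new_brick

brick = SimpleNamespace(part="3001.dat", colour="red",
                        orientation=(0, 0, 0), position=(0, 0, 0))
print(clone_brick(brick, pick_position=(4, 0, 2)))
```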

4 User Evaluation

To evaluate our prototype application (LTouchIt), we compared it with two existing LEGO applications based on the WIMP paradigm. We chose LEGO Digital Designer (LDD) and MLCad, since our preliminary evaluation [10] suggested they were the most pleasing for users. Furthermore, LDD is a proprietary LEGO tool, while MLCad is a community-driven effort.

Figure 10: User interacting with LTouchIt.

4.1 Participants

The evaluation was carried out by 20 users (15 male and 5 female; all right-handed), ranging from 11 to 26 years old (mean = 23.5 years). Concerning previous experience with CAD or 3D modelling applications, 10% of the users had some experience and 20% were very experienced with such software. Only 15% had never used a multi-touch device, while the remainder had experience with multi-touch smartphones. None of the test subjects had used a LEGO modelling application before.

4.2 Apparatus

Tests were conducted in a closed environment, featuring a computer (with mouse and keyboard) running LDD and MLCad, and our multi-touch enabled surface (a 1.58 x 0.87 m optical tabletop), which is depicted in Figure 10. With the users' permission, tests were videotaped and comments were transcribed from the audio recordings.

4.3 Test Description

Tests were structured in four stages: a pre-test questionnaire to establish the user profile; a briefing about the test purpose and a training session; three tasks (one for each application, each preceded by a demonstration of the application); and a questionnaire after completing the task in each application. To ensure an even distribution across applications, we alternated the application order for each user.

Each test session comprised three situations: LDD, MLCad (both running on a desktop computer with a mouse) and LTouchIt (on the tabletop). For each application the user was required to build the LEGO model depicted in Figure 11. Users were given a sheet of paper describing the desired task (step-by-step building instructions), resembling the construction guides that come with real LEGO model kits. Users were encouraged to study the model beforehand, although they were free to consult it throughout the task.


Figure 11: (a) The LEGO model; (b) Test briefing.

Figure 12: Average task completion time (mm:ss) per application: LTouchIt 15:14, LDD 12:17, MLCad 15:29.

The LEGO model (Figure 11) contains several bricks of different types and colours, so users were required to search through different groups and paint with different colours. Also, the inclusion of repeated bricks in the model allowed us to understand whether users prefer to pick the same brick from the list or to clone it. Finally, building this model requires rotating bricks around all three axes.

All participants received a LEGO model kit in gratitude for the time spent in our evaluation.

4.4 Results

We present three different perspectives on the analysis of the results from our user study. Firstly, we present a quantitative analysis drawn from task completion times in each application; we follow with a qualitative analysis based on the questionnaires; and finally, we discuss several observations captured throughout the test sessions.

4.4.1 Task Completion Time

For each application, we measured the time required to complete the construction of the model. The results follow a normal distribution, according to the Shapiro-Wilk test. Thus, we subjected our results to a one-way ANOVA test, which suggested that statistically significant differences existed (F(2,57) = 4.123, p < .05). A post-hoc Tukey HSD multiple comparisons test revealed that LDD was significantly different from MLCad.

Table 1: Questionnaire results for each application (Median, Inter-quartile Range). An asterisk (*) indicates statistical significance.

           LTouchIt   LDD       MLCad
Fun*       3.0 (1)    3.0 (2)   2.0 (2)
Move*      4.0 (1)    3.0 (2)   4.0 (1)
Rotate*    4.0 (1)    2.0 (1)   3.0 (1)
Pan        4.0 (1)    4.0 (1)   3.0 (2)
Orbit      4.0 (1)    4.0 (0)   -
Zoom       4.0 (1)    4.0 (0)   4.0 (0)
Search*    4.0 (1)    3.0 (1)   2.0 (1)
Clone      4.0 (0)    4.0 (1)   4.0 (2)
Paint*     4.0 (0)    3.5 (1)   3.0 (1)

The average completion time for each application is depicted in Figure 12. LDD, due to its collision detection and brick adaptation systems, was the fastest. MLCad, although fast for some 3D-expert users, was generally the slowest. LTouchIt, despite using an interaction paradigm with which participants were not acquainted, and lacking automatic brick fitting, performed slightly faster than MLCad.

4.4.2 User Feedback

In the questionnaires, we asked users to classify, on a 4-point Likert scale (1 - none, 4 - very), each application regarding: how fun it was to use; how easy it was to manipulate bricks (translation and rotation); camera control (pan, orbit and zoom); searching and retrieving bricks; cloning bricks; and painting. The participants' ratings are shown in Table 1. The Wilcoxon Signed-Rank Test was used to assess statistically significant differences.

Users strongly agreed that MLCad is less fun than LTouchIt and LDD (Z = -3.043, p = .002 and Z = -2.586, p = .010). Concerning brick manipulation, users strongly agreed that MLCad is easier to use than LDD (Z = -1.979, p = .048). The LDD brick adaptation system, although efficient in most cases, can occasionally give frustrating outcomes. For instance, when the user wants to place a brick side by side with another, depending on the perspective, the application might place it above or below the first, which is incorrect for the user. Also for brick manipulation, LTouchIt's classification is very close to MLCad's. Moreover, users strongly agreed that rotating bricks is more difficult in LDD than in LTouchIt or MLCad (Z = -3.125, p = .002 and Z = -2.045, p = .041). Participant comments suggest that the virtual handles of LTouchIt are preferred over the rotation buttons in the MLCad interface.

Regarding camera manipulation, no application stood out. The gestures used in LTouchIt were able to compete with the other applications' solutions, with no significant differences in users' preference. Since MLCad does not allow camera rotation in the editable viewports, we did not include the orbit classification for this application.

To retrieve the desired bricks from the list, users strongly agreed that LTouchIt was easier than LDD and MLCad (Z = -3.493, p < .001 and Z = -3.697, p < .001). The same is observed for the user preferences regarding colouring bricks (Z = -2.877, p = .004 and Z = -3.166, p = .002), with LTouchIt above the rest. Our clone feature showed less deviation than LDD's and MLCad's, suggesting that the gesture was found understandable by most users, even when compared to the clone mode in LDD and the keyboard shortcuts used in MLCad to "copy and paste" bricks.
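For readers wishing to reproduce this style of analysis, the sketch below shows how the tests reported in this section (a Shapiro-Wilk normality check, a one-way ANOVA with a Tukey HSD post-hoc on completion times, and a Wilcoxon signed-rank test on paired Likert ratings) can be run with SciPy and statsmodels. The arrays are random placeholder data, not the study's measurements.

```python
import numpy as np
from scipy import stats
from statsmodels.stats.multicomp import pairwise_tukeyhsd

np.random.seed(0)

# Placeholder completion times in seconds (not the actual study data).
ltouchit = np.random.normal(914, 60, 20)   # ~15:14
ldd      = np.random.normal(737, 60, 20)   # ~12:17
mlcad    = np.random.normal(929, 60, 20)   # ~15:29

# Normality check per application, then a one-way ANOVA across the three.
for name, sample in [("LTouchIt", ltouchit), ("LDD", ldd), ("MLCad", mlcad)]:
    w, p = stats.shapiro(sample)
    print(name, "Shapiro-Wilk p =", p)
print("ANOVA:", stats.f_oneway(ltouchit, ldd, mlcad))

# Post-hoc Tukey HSD to see which pairs differ.
times = np.concatenate([ltouchit, ldd, mlcad])
groups = ["LTouchIt"] * 20 + ["LDD"] * 20 + ["MLCad"] * 20
print(pairwise_tukeyhsd(times, groups))

# Paired Likert ratings (e.g. "fun") compared with a Wilcoxon signed-rank test.
fun_ltouchit = np.random.randint(1, 5, 20)
fun_mlcad    = np.random.randint(1, 5, 20)
print("Wilcoxon:", stats.wilcoxon(fun_ltouchit, fun_mlcad))
```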

4.4.3 Observations

We observed throughout the user evaluation that the orthogonal-viewports paradigm, which is employed in MLCad, is not natural for non-CAD experts. Conversely, we witnessed that our solution simplifies 3D manipulation for non-expert users.

In MLCad, users showed no problems rotating bricks, since the viewports allow the user to maintain different views of the model. In this case, all interactions are 2D, because they take place in the viewports rather than in a perspective view. In LDD, users tend to have occasional problems with 3D rotations, because the rotation operation is perspective-dependent and the available rotation axes are not identified, which most users found frustrating. Our solution goes one step further by allowing 3D manipulation with a free camera, while avoiding the aforementioned issues.

Furthermore, user comments suggest that the LTouchIt pick metaphor tends to be more natural than the brick selection in LDD. In LTouchIt, once a brick is picked, the user inherently knows that it is being held, since the two fingers are in contact and the pick and move gestures are fluidly merged. Conversely, LDD requires the user to click on the desired brick and click again to place it in the target position, thus providing no tactile feedback that the brick is held. This observation also conforms with the known advantages of direct over indirect manipulation [6].

We observed that most users found our clone gesture to be an effective way of speeding up constructions that often contain repeated bricks, such as walls.

5 Conclusions

In this paper we presented a system for virtual LEGO modelling, specifically designed for multi-touch tabletop devices. These devices offer new interaction possibilities, but also several challenges that must be surpassed. We focused primarily on multi-touch manipulation of 3D objects on a large surface, developing a set of gestures that can be applied to a variety of research domains outside the LEGO scope.

We conducted an evaluation with 20 users, comparing our proposal with two LEGO applications. Results suggest that our interactive application is competitive with existing solutions, yet it provides a hands-on experience, one that we believe to be more adequate for entertainment and educational purposes than existing LEGO applications. Furthermore, from user comments and questionnaire analysis, we can conclude that most participants found it easy to manipulate 3D objects without expertise in CAD or 3D software.

Our findings generalize to research on interaction with 3D objects on multi-touch tabletops. Our tests showed that users are comfortable with the separation of rotation and translation into atomic actions (as stated in [9]) rather than with combined actions. Also, these manipulations are separated by DOF, which we found helpful for novice users. Translation can be performed on a plane (two DOFs), which maps directly to the device's affordance, since tabletops offer two-dimensional input (X-Y touch coordinates). For rotation, only one DOF is considered at a time, since users found multiple DOFs to be more cognitively demanding. This seems to corroborate [1, 9], as users' agreement over gestures for 3D rotation leans towards actions that manipulate one DOF at a time.

Furthermore, our scenario addressed the manipulation of small objects on a low-resolution input device (since not all tabletops are able to distinguish very close finger touches). Methods such as Sticky Fingers [4] require three fingers to be placed on an object, which can be problematic for small objects, since the object must be wide enough to accommodate all fingers in a comfortable position. The use of handles overcomes this limitation, since the manipulation is no longer performed directly on the object, as proposed for 2D and 3D multi-touch interactions in [11, 1].

6 Future Work

We believe that our system can evolve from the current prototype to a fully fledged tool for LEGO fans of all ages. To achieve this vision, new features can be developed, such as: collision detection (fitting bricks together automatically); improved brick retrieval (one that scales to larger sets of bricks); support for all standard LEGO bricks; group selection (manipulating multiple bricks at once); and, finally, multi-user collaboration around the same tabletop. These functionalities raise a whole new set of challenges to be tackled in the future.

References

[1] A. Cohe, F. Decle, and M. Hachet. tBox: A 3D Transformation Widget designed for Touch-screens. In ACM CHI Conference on Human Factors in Computing Systems, Vancouver, Canada, May 2011.

[2] B. Conner, S. Snibbe, K. Herndon, D. Robbins, R. Zeleznik, and A. Van Dam. Three-dimensional widgets. In Proceedings of the 1992 Symposium on Interactive 3D Graphics, pages 183–188. ACM, 1992.

[3] M. Hancock, S. Carpendale, and A. Cockburn. Shallow-depth 3D interaction: Design and evaluation of one-, two- and three-touch techniques. In Proc. CHI 2007, pages 1147–1156, New York, NY, USA, 2007. ACM Press.

[4] M. Hancock, T. ten Cate, and S. Carpendale. Sticky tools: Full 6DOF force-based interaction for multi-touch tables. In Proc. ITS, pages 145–152, 2009.

[5] M. S. Hancock, F. Vernier, D. Wigdor, S. Carpendale, and C. Shen. Rotation and translation mechanisms for tabletop interaction. In Proc. Tabletop, pages 79–86. IEEE Press, 2006.

[6] K. Kin, M. Agrawala, and T. DeRose. Determining the benefits of direct-touch, bimanual, and multifinger input on a multitouch workstation. In Proceedings of Graphics Interface 2009, pages 119–124. Canadian Information Processing Society, 2009.

[7] R. Kruger, S. Carpendale, S. Scott, and A. Tang. Fluid integration of rotation and translation. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems, pages 601–610. ACM, 2005.

[8] A. Martinet, G. Casiez, and L. Grisoni. The design and evaluation of 3D positioning techniques for multi-touch displays. In 3D User Interfaces (3DUI), 2010 IEEE Symposium on, pages 115–118. IEEE, 2010.

[9] A. Martinet, G. Casiez, and L. Grisoni. The effect of DOF separation in 3D manipulation tasks with multi-touch displays. In Proceedings of the 17th ACM Symposium on Virtual Reality Software and Technology, pages 111–118. ACM, 2010.

[10] D. Mendes and A. Ferreira. Virtual LEGO modelling on multi-touch surfaces. In WSCG'2011 Full Papers Proceedings, pages 73–80, 2011.

[11] M. Nacenta, P. Baudisch, H. Benko, and A. Wilson. Separability of spatial manipulations in multi-touch interfaces. In Proceedings of Graphics Interface 2009, pages 175–182. Canadian Information Processing Society, 2009.

[12] J. Reisman, P. Davidson, and J. Han. A screen-space formulation for 2D and 3D direct manipulation. In Proceedings of the 22nd Annual ACM Symposium on User Interface Software and Technology, pages 69–78. ACM, 2009.

[13] T. Santos, A. Ferreira, F. Dias, and M. J. Fonseca. Using sketches and retrieval to create LEGO models. In EUROGRAPHICS Workshop on Sketch-Based Interfaces and Modeling, 2008.

[14] K. Shoemake. Arcball: A user interface for specifying three-dimensional orientation using a mouse. In Graphics Interface, volume 92, pages 151–156, 1992.

[15] A. Van Dam. Post-WIMP user interfaces. Communications of the ACM, 40(2):63–67, 1997.

[16] A. Wilson. Simulating grasping behavior on an imaging interactive surface. In Proceedings of the ACM International Conference on Interactive Tabletops and Surfaces, pages 125–132. ACM, 2009.

[17] J. O. Wobbrock, M. R. Morris, and A. D. Wilson. User-defined gestures for surface computing. In Proceedings of the 27th International Conference on Human Factors in Computing Systems, CHI '09, pages 1083–1092, New York, NY, USA, 2009. ACM.
