
Scaled Reality: Interfaces For Augmenting Information On Small-Scale Tangible Objects

Abstract
In this paper, we introduce the concept of Scaled Reality, together with a set of interfaces that distort, deform, and resize our comprehensible spatial world to help us augment and visualize information on physical objects that cannot otherwise be accessed due to their size or orientation. We introduce three interfaces: a mirrored table top, a see-through lens system, and an L-shaped display that scale, view, and manipulate object forms in novel ways. As a proof-of-concept application, we show augmentation and retrieval of information tagged onto tangibles with a relatively microscopic form factor.

Author Keywords
Scaled Reality; Tangible Interfaces; Small-Scale Tangibles; Tagging; Reflective Interfaces; See-Through Lens System; L-Shaped Displays; Augmented Reality

ACM Classification Keywords
H.5.2 [Information Interfaces and Presentation]: User Interfaces

Copyright is held by the author/owner(s).

CHI 2013 Extended Abstracts, April 27–May 2, 2013, Paris, France. ACM 978-1-4503-1952-2/13/04.

Austin S. Lee*
MIT Media Lab
75 Amherst St, Cambridge, MA 02139 USA
[email protected]

Kshitij Marwah*
MIT Media Lab
75 Amherst St, Cambridge, MA 02139 USA
[email protected]

*These authors contributed equally to this work

Introduction
In his seminal paper, Steve Mann describes the concept of “mediation”, a filtering operation applied to reality that, in conjunction with a combining operation, inserts overlays [1]. The field of Augmented Reality [2] implements this filtering operation by adding information on top of our perception. Recently, the other extreme of this spectrum, Diminished Reality, has been explored as a way to subtract unwanted information from our surroundings, giving us an unobtrusive perception of reality [3].

Between these two extremes, we introduce the concept of Scaled Reality, with a set of interfaces that selectively magnify or contract our reality to help us comprehend information that is inaccessible due to the relative size, orientation, or build of the object. To motivate the idea, consider a filtering operation such as tagging relatively small industrial parts such as nuts or bolts. Tagging these parts is important for encoding information such as time-stamps, part numbers, and other design attributes [4], but the process is difficult due to their small size and orientation. We envision that scanning three-dimensional models of such industrial parts, in combination with Scaled Reality interfaces, can circumvent this problem by enabling a selective scaling operation on the bolt to enhance its form and shape. This allows comprehensible and uncluttered tagging and retrieval of information from the object, as shown in Figure 1.

A complementary approach applies to macroscopic objects such as buildings and monuments. These architectural spaces are difficult to comprehend and manipulate due to their large size and distance. Here, Scaled Reality interfaces such as the see-through lens allow a low-dimensional projection of reality onto a display screen. As before, we assume that with real-time three-dimensional maps and models of these spaces, one can selectively re-size, deform, and emphasize urban landscapes with relevant information. In recent work, ClayVision [5] has elaborately explored these interactions from the standpoint of providing real-time urban design experiences. We approach such interfaces as a general class of systems that allow re-scaling of reality to better comprehend, manipulate, and augment information that is otherwise inaccessible due to extremes of size or orientation.

Figure 1. (Right) Augmented Reality for adding information onto real objects, (Left) Diminished Reality for subtracting content, (Middle) Scaled Reality for resizing our inaccessible visual world to make it comprehensible for applications such as tagging.

The goal of this paper is twofold. First, we introduce two key attributes of any Scaled Reality system: Selection and Dimension Control. Second, we develop and compare three interface designs that implement these attributes: the Table-Top Surface, the See-Through Lens, and an L-shaped display. Throughout the paper, our application scenario is tagging small-scale or microscopic objects such as industrial parts or biological specimens. We assume that, in combination with these interfaces, we can scan three-dimensional models of these objects in real time, which can then be leveraged for interactive and selective scaling.

Related Work
Prior work can be divided into three general areas. We focus not on specific implementation designs but on related general frameworks.

Mediated Reality Systems
The first work in Augmented Reality can be traced back to Ivan Sutherland’s vision in [6]. Early work considered AR applications on head-mounted display systems [7]. One of the first systems using a see-through digital magnifying glass was implemented in NaviCam [8]. This concept has since been brought to mobile hardware with advances in technology and computing power [9]. Our work explores the scale dimension in mediated reality systems, where one needs to magnify or contract reality to better comprehend and augment information that is otherwise inaccessible.

Combining Tangible Interfaces With AR
In Tangible User Interface (TUI) research, projection is often used to augment the surroundings of the tangible input with information. For example, Digital Shadow is a notable AR system for TUI. In URP, the augmented information is a digital form-simulation synchronized to the physical icons or tangible interface [10]. In our work, rather than working with shadows, we use existing knowledge of three-dimensional models to augment a selectively scaled version of the object.

Interfaces That Deform Reality For Tagging
Tagging physical objects with RFID tags has been shown in [11] and [12]. Such interfaces work well for macro-scale objects that can be annotated and retrieved, but for small parts it is impossible to add physical tags without complicated re-fabrication efforts. ClayVision [5] can be seen as one of the first works that deform reality to allow both tagging and emphasis on certain parts of the scene. Though positioned as enabling novel experiences in a deformable city, we see it as a first implementation of a Scaled Reality interface, where the landscape is scaled down and distorted for tractable annotation and emphasis.

Scaled Reality Attributes
There are two key motivations for implementing a Scaled Reality system. The first, as explained above, is to comprehend and interact with extremely large or small-scale objects that cannot otherwise be accessed. Beyond these extremes, the second is to allow emphasis or de-emphasis on parts of our reality by scaling them up or down, respectively. It is important to differentiate this from traditional zooming, where the goal is a low-resolution uniform expansion of the scene, generally on a two-dimensional surface. Scaled Reality, on the other hand, assumes real-time scanning to obtain three-dimensional models of objects, allowing selective scaling and emphasis along every dimension. These two attributes, Selective Scaling and Dimension Control, form the basis of any Scaled Reality interface, as discussed below.

Figure 2. Here, we show an L-shaped interface where a user points at an object for selection (bottom). On top, we see the selected object with boundary highlights.

Selective Scaling
This attribute selects a given object, or part of the reality, that is to be re-scaled or emphasized. The user defines the selection boundary through gestural or touch-based input in the object space, as shown in Figure 2.
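As a minimal sketch of how such a selection might be computed, assuming the scanned model arrives as an array of 3D vertices and the user traces a polygonal boundary on the interaction surface (both assumptions of ours, not details the paper specifies), the vertices whose projections fall inside the boundary can be masked:

```python
# Sketch of the Selection attribute. Assumptions: the scanned model is an
# (N, 3) NumPy array of vertices, and the user-drawn boundary is a polygon
# on the interaction surface's x-y plane.
import numpy as np
from matplotlib.path import Path

def select_vertices(vertices, boundary_xy):
    """Return a boolean mask of vertices whose x-y projection lies
    inside the user-drawn selection boundary."""
    polygon = Path(boundary_xy)                  # closed 2D boundary polygon
    return polygon.contains_points(vertices[:, :2])

# Example: select the part of a stand-in bolt model inside a 1 cm square.
bolt = np.random.rand(500, 3) * 0.03             # hypothetical scanned mesh (m)
square = [(0.01, 0.01), (0.02, 0.01), (0.02, 0.02), (0.01, 0.02)]
mask = select_vertices(bolt, square)
print(f"{mask.sum()} of {len(bolt)} vertices selected")
```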

Dimension Control
After selecting a part of the scene or object, a Scaled Reality interface enables re-scaling the selection individually or jointly in all three dimensions. An example of a gestural taxonomy for Dimension Control is shown in Figure 3. This selective scaling allows fine-grained control over the specific parts that need to be emphasized for applications such as tagging and annotation. For example, to annotate the tail of a nail among a set of nails and bolts, one first selects the given nail using the Selection attribute. Then, using Dimension Control along the x-dimension, the tail is expanded to allow multiple tags.
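A sketch of this operation under the same assumed vertex-array representation: the selected vertices are scaled anisotropically about their centroid, v' = c + S(v - c) with S = diag(sx, sy, sz), so expanding the x-dimension alone stretches the tail without distorting the rest of the model.

```python
# Sketch of Dimension Control: per-axis scaling of the selected vertices
# about their own centroid. The mesh and selection mask are stand-ins.
import numpy as np

def scale_selection(vertices, mask, sx=1.0, sy=1.0, sz=1.0):
    """Re-scale only the selected vertices, about the selection centroid."""
    scaled = vertices.copy()
    centroid = vertices[mask].mean(axis=0)
    S = np.diag([sx, sy, sz])                    # independent per-axis factors
    scaled[mask] = centroid + (vertices[mask] - centroid) @ S.T
    return scaled

nail = np.random.rand(500, 3) * 0.04             # hypothetical scanned mesh (m)
tail = nail[:, 0] > 0.03                         # stand-in selection: the tail
expanded = scale_selection(nail, tail, sx=4.0)   # 4x along x for multiple tags
```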

Scaled Reality Interfaces
Any Scaled Reality interface implements the two attributes defined above, which can be invoked using either natural gestures or touch-based interactions. Here we discuss and analyze three possible implementations: the Table-Top Mirror Interface, the See-Through Lens System, and the L-Shaped Display. Note that in all interfaces we assume the existence of a three-dimensional model obtained in real time by scanning the object or scene. This model, combined with textural attributes, is then used to modify and display scaled versions below, above, or beside the original objects.
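The structure shared by the three interfaces could be sketched as the per-frame loop below; scan_object, read_user_input, and render are hypothetical placeholders for a real scanner, gesture or touch tracker, and display back end, none of which the paper specifies.

```python
# Sketch of the per-frame loop common to all three interfaces. All three
# functions below are hypothetical placeholders, not specified components.
import numpy as np

def scan_object():                        # placeholder: real-time 3D scan
    return np.random.rand(500, 3), None   # vertices, texture

def read_user_input():                    # placeholder: gesture/touch tracker
    return None, (1.0, 1.0, 1.0)          # selection boundary, scale factors

def render(vertices, texture):            # placeholder: screen-specific display
    pass

for _ in range(3):                        # stand-in for the interactive loop
    vertices, texture = scan_object()
    boundary, factors = read_user_input()
    # Selection and Dimension Control (see the sketches above) apply here.
    render(vertices, texture)
```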

Figure 3. Example gesture taxonomy for scaling along each dimension of the selected object. The top-right image also shows scaling by combining Dimension Control along both the x- and y-axes.

Figure 4. A table-top interface demonstrating the tagging application. As shown, the reflection is flipped and scaled to tag parts of objects that are otherwise inaccessible.

Table-Top Mirror Interface
A table-top interface consists of a display screen mounted horizontally on a table, on which micro- to modest-sized objects can be placed. The display screen shows the flipped reflection of the object, constructed from its textured three-dimensional model. The interaction surface in this design is the table top, which is used to record touch-based gestures for selection and dimension control, as shown in Figure 4. This interface is extremely useful for micro- and small-scale objects such as bolts, nails, and biological specimens such as insects, which need to be scaled up, but less so for large spaces such as buildings, which need to be scaled down.
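As a minimal sketch, assuming the object rests on the table plane z = 0 and the horizontal screen magnifies the mirror image by a uniform factor (both illustrative assumptions), the reflection step amounts to:

```python
# Sketch of the Table-Top Mirror Interface's reflection step. The table
# plane (z = 0) and magnification factor are illustrative assumptions.
import numpy as np

def mirror_and_scale(vertices, magnification=8.0):
    """Reflect the model across the table plane, then magnify for display."""
    reflected = vertices * np.array([1.0, 1.0, -1.0])  # flip across z = 0
    return reflected * magnification

specimen = np.random.rand(500, 3) * 0.005    # stand-in 5 mm specimen mesh (m)
on_screen = mirror_and_scale(specimen)       # flipped, scaled-up reflection
```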

Figure 5. A see-through transparent display to scale the object using natural gestures. The scaled version is augmented on top of the physical object as seen through the screen.

Figure 6. An L-shaped display with the object placed on the horizontal surface. A camera is used to record gestural interactions for selection and dimension control. The corresponding scaled version is then displayed on the vertical screen.

See-Through Lens System
This interface has two possible implementations: a transparent display screen, or a combination of a camera and a handheld display, through which micro and small-scale objects can be seen and manipulated once scaled up beyond the frame of the device. The interface is also useful for scaling down a scene, such as buildings. User interaction for both selection and dimension control in this implementation is purely gestural, as shown in Figure 5.
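A sketch of the overlay step, assuming a simple pinhole model for the lens or camera (the intrinsic parameters below are illustrative): the scaled-up model is projected into screen pixels so the augmentation sits over the physical object as seen through the display.

```python
# Sketch of the See-Through Lens overlay under an assumed pinhole camera
# model; focal length and principal point are illustrative values.
import numpy as np

def project(vertices, f=800.0, cx=640.0, cy=360.0):
    """Project 3D points (camera coordinates, z > 0) to pixel coordinates."""
    x, y, z = vertices[:, 0], vertices[:, 1], vertices[:, 2]
    return np.stack([f * x / z + cx, f * y / z + cy], axis=1)

bolt = np.random.rand(500, 3) * 0.01 + [0.0, 0.0, 0.3]  # 1 cm object, 30 cm away
center = bolt.mean(axis=0)
enlarged = center + (bolt - center) * 4.0    # scale up about the object center
pixels = project(enlarged)                   # overlay drawn at these pixels
```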

L-Shaped Display
The third interface design consists of a horizontal object space with a vertical display screen on which the selectively scaled version is viewed. A camera system records both the object texture and the gestural user interactions, which are then parsed to implement selection and dimension control. This system demonstrates the efficacy of having the tangible object in place, as it acts as a handle for operations such as rotation and translation.
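One way the camera input might be parsed into a Dimension Control event, as a sketch: map the separation between two tracked fingertips to a scale factor along one axis. Fingertip tracking itself is assumed to come from an upstream hand tracker, and this particular mapping is our own illustration.

```python
# Sketch of parsing tracked fingertips into a Dimension Control scale
# factor. Fingertip positions (meters, on the object plane) are assumed
# inputs from a hypothetical hand tracker.
import numpy as np

def pinch_to_scale(thumb_xy, index_xy, rest_distance=0.05):
    """Map fingertip separation to a scale factor (1.0 at 5 cm apart)."""
    separation = np.linalg.norm(np.asarray(thumb_xy) - np.asarray(index_xy))
    return float(separation / rest_distance)

sx = pinch_to_scale((0.10, 0.20), (0.25, 0.20))  # fingers 15 cm apart
print(f"scale along x: {sx:.1f}x")               # -> scale along x: 3.0x
```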

Preliminary User Study
We conceptualized the three Scaled Reality interfaces and conducted a preliminary user survey to understand which interface is most convincing for Scaled Reality. Our study with 6 participants showed that each interface has specific features that are useful. Two-thirds of the participants found the Table-Top Mirror Interface the most convincing, as it shows a direct relationship, at the point of contact, between the object and its scaled-up representation. The remaining one-third found the See-Through Lens System better for extremely scaled-up content; they envisioned it being implemented with wearable augmented reality interfaces such as glasses. All participants also felt that such an interface suits high-precision tasks, as one does not move the object after it has been placed. A feature of the L-shaped display that stood out for one-third of the participants was its explicit decoupling of the physical and virtual spaces, which keeps the physical space uncluttered when interacting with multiple objects.

Discussion And Future Work
In this paper, we introduced the concept of Scaled Reality to inspire the design and implementation of interfaces for interacting with reality that is too small to comprehend. Such a paradigm is tremendously useful in applications such as tagging microscopic objects, from biological specimens to industrial products of diverse sizes. It also allows emphasizing or de-emphasizing certain objects in a collection by scaling them up or down, or analyzing their specific parts that are otherwise inaccessible. Going forward, based on our preliminary user survey, we plan to implement the three interfaces discussed here.

References
[1] Mann, S. Mediated Reality. Linux J. 59es, Article 5 (March 1999).

[2] Azuma, R., Baillot, Y., Feiner, S., Julier, S., and MacIntyre, B. Recent Advances in Augmented Reality. IEEE Computer Graphics and Applications (2001).

[3] DeVincenzi, A., Yao, L., Ishii, H., and Raskar, R. Kinected Conference: Augmenting Video Imaging with Calibrated Depth and Audio. In Proc. of the ACM 2011 Conference on Computer Supported Cooperative Work (CSCW '11), ACM Press (2011), 621-624. DOI=10.1145/1958824.1958929.

[4] Regenbrecht, H. Industrial Augmented Reality Applications. Chapter XIV (2006).

[5] Takeuchi, Y. and Perlin, K. ClayVision: The (Elastic) Image of the City. In Proc. CHI 2012, ACM Press (2012).

[6] Sutherland, I. The Ultimate Display. In Proc. of IFIP Congress (1965).

[7] Feiner, S., MacIntyre, B., and Hollerer, T. A Touring Machine: Prototyping 3D Mobile Augmented Reality Systems for Exploring the Urban Environment. In Proc. of ISWC (1997).

[8] Rekimoto, J. The Magnifying Glass Approach to Augmented Reality Systems. In International Conference on Artificial Reality and Tele-Existence (1995).

[10] Underkoffler, J. and Ishii, H. URP: A Luminous Tangible Workbench For Urban Planning And Design. In Proc. of the SIGCHI Conference On Human Factors In Computing Systems (1999).

[11] Välkkynen, P., Korhonen, I., Plomp, J., Tuomisto, T., Cluitmans, L., Ailisto, H., and Seppä, H. A User-Interaction Paradigm for Physical Browsing and Near-Object Control Based on Tags. In Physical Interaction Workshop on Real World User Interfaces (2003).

[12] Lee, J., Seo, D., Song, B., and Gadh, R. Visual and Tangible Interactions with Physical and Virtual Objects Using Context-Aware RFID. In Expert Systems with Applications (2010).
