
    Mission Impossible Extended Room

    Abstract

Our goal was to recreate the Kremlin hallway scene from the movie Mission Impossible: Ghost Protocol. To accomplish this we used the big screen in our lab as a window into a 3D virtual extension of the lab. We created a 3D virtual space that mirrors the real lab and includes a natural extension of it. The goal is to use the big screen as a window into this virtual space, deluding a user into thinking the lab extends past the screen when they look at it. To accomplish this we created a window into the virtual world on the x,y plane that represents the big screen in our 3D virtual space. We then map the user's position in the real lab onto an avatar at the corresponding position in the virtual lab using a Kinect. The image on screen is dynamically updated by drawing a line from the avatar's head through the big screen to determine the camera position; this line continues through the screen towards the middle of the back wall of the extended lab to determine the camera rotation.

    Keywords

Blended Reality, Avatar, Mission Impossible Ghost Protocol, Skeleton Tracking, Big screen, Microsoft Kinect

    ACM Classification Keywords

H.5.2 [Information interfaces and presentation]: User Interfaces.

    Copyright is held by the author/owner(s).

    CISC425, March 25, 2013, Kingston, Ontario, Canada.

    Jordan van der Kroon

    05914732

    [email protected]

    Khurrum Abdul Mujeeb

    05801597

    [email protected]

    Damilare Awosanya

    10073810

    [email protected]


    Introduction

In the movie, the representation of the hallway projected on the big screen was accurate enough to fool the security guard at the opposite end of the hallway into thinking he was looking down the length of the hallway normally. In the movie, the screen uses eye tracking and a projector mounted on a moveable robotic arm to accomplish this. Once a second security guard steps in front of the screen it fails to lock onto a single user, and the image projected on screen appears to bounce back and forth between the two perspectives.

Our goal for this project was to show a natural extension of the lab on screen that dynamically updates its perspective based on a single user's position in front of the screen. The image shown on screen should accurately match what a user would see if they were in the virtual 3D space we had created. We were therefore required to build a 3D virtual space that accurately represents the real lab and a user's position inside it. The next step was to add an extension to this space where the big screen normally sits. This extension is what is shown on screen, deluding the user into thinking that what they are looking at is a lab twice as big as it really is.

Related Work

    Blended Reality

To make the user feel like they are looking through the screen into the natural extension of the lab, we wanted to build on the experience of Blended Reality (BR). The idea of the big screen being a window of perspective into the virtual room is based on the research done by Huynh et al. [6]. In their work on blended reality they wanted to eliminate the mapping of physical input on a user interface to interactions in the virtual world. To accomplish this they use the screen as a window into their virtual world where the user can interact with virtual objects without a representation of the user, or an avatar, to do so. They accomplish this in their 3D model of the "Apple Yard" by having apples fly out of the virtual world through the window that is their screen and having users hit them using a hand-held wand.

Our implementation is similar to this except for the fact that we have a representation of the real world in our virtual world. This representation is not in view of the user because our window into the virtual lab sits on the xy plane where the representation of the real lab ends and our extended virtual lab begins. The avatar that represents the user and their position in the lab is behind this window, which is the only perspective the user has into the virtual world. The window through which the user can see into the virtual extended lab is updated based on the user's position in front of the screen. The avatar mirrors the user's movement and position around the real lab, and the perspective shown on screen is dynamically updated to represent the user's new position.

    Implementation

    3D Model

We had access to the 3D model that was used as a blueprint when building the lab. The model we received had no textures or lighting, which caused everything in the model to appear black. This made it very difficult to identify key parts of the lab. A light source and white textures were added to most of the components that made up the lab so that we could correctly identify them. We were only concerned with a single room of the lab, so we removed everything that was not part of this room.
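A minimal sketch of this setup step as a Unity script is shown below. It assumes the imported blueprint model is parented under a single root object; the field name labModelRoot and the use of the legacy Diffuse shader are illustrative, not taken from the actual project files.

using UnityEngine;

// Sketch only: light the imported lab model and paint it white so individual
// components become distinguishable. "labModelRoot" is a placeholder name for
// the root object of the imported blueprint model.
public class LabModelSetup : MonoBehaviour
{
    public GameObject labModelRoot;

    void Start()
    {
        // Add a simple directional light so the unlit model is visible.
        var lightObject = new GameObject("Key Light");
        var keyLight = lightObject.AddComponent<Light>();
        keyLight.type = LightType.Directional;
        lightObject.transform.rotation = Quaternion.Euler(50f, -30f, 0f);

        // Give every renderer in the model a plain white diffuse material.
        foreach (var r in labModelRoot.GetComponentsInChildren<Renderer>())
        {
            r.material = new Material(Shader.Find("Diffuse")) { color = Color.white };
        }
    }
}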

    Figure 1. Unmodified Model with white textures added

Figure 2. Trimmed down model with textures added from high-res photos taken from inside the HML. Big screen can be seen here in black.

Our goal here was to create an extended version of our lab, with the big screen as the point where the real lab ends and the extended lab can be seen. To track the position of the user in the real lab, an avatar representation of the user moves around the real-lab side of the 3D virtual world. With the position of this avatar on the real side of the virtual lab, we are able to show the perspective of the avatar looking through the big screen.

To create a natural extension of the lab we wanted to incorporate elements in the extension similar to those inside the real lab. To achieve a natural feel we didn't want to simply show a mirror image of the lab on the other side of the screen, but we did want to maintain the design by keeping a row of pods on the right side and a large sprawling desk on the left. The 3D virtual lab we created is asymmetrical. The concept behind that came from the design of the windows. The window behind the big screen in the real world is a perfect cross, while all the windows to the left of it (when looking out of the lab) skew the cross to the left. The symmetry point of the extended lab is the perfect cross behind the big screen, and all of the windows to the right skew to the right in the same style as the real lab.

    Figure 3. Asymmetrical extended lab windows

Figure 4. Bird's-eye view of the asymmetrical extended lab (left) next to the real lab (right)

    Kinect Implementation

In order to extract real-world data to use as an input for moving around the extended lab, we used a single Kinect with the Kinect SDK 1.6. Since we are using Unity to render the extended lab model, the Kinect SDK alone did not suffice. To map data from the Kinect into Unity we required a Kinect-to-Unity wrapper, which was available in the form of Zigfu, an open-source Kinect-Unity wrapper.

Along with the ability to map data from the Kinect to Unity, Zigfu provides useful sample scripts. The Blockman3rdPerson sample script was used as the basis for our project. The blockman container in the script simulates a user skeleton, and by placing the blockman container into the extended lab model we were able to simulate the location of the person inside the real lab. Once the blockman container was placed in the extended lab according to our specifications, transparency was applied to the container to provide a first-person view of the lab, as sketched below.
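The following is a minimal sketch of that transparency step, assuming the blockman parts carry ordinary Unity renderers and using Unity's legacy Transparent/Diffuse shader; the component name is illustrative and not taken from the Zigfu samples.

using UnityEngine;

// Sketch: make every renderer under the blockman container fully transparent
// so the tracked avatar drives the view without appearing in it.
public class HideBlockman : MonoBehaviour
{
    void Start()
    {
        foreach (var r in GetComponentsInChildren<Renderer>())
        {
            // Switch to a shader that supports alpha and zero out the alpha channel.
            r.material.shader = Shader.Find("Transparent/Diffuse");
            Color c = r.material.color;
            r.material.color = new Color(c.r, c.g, c.b, 0f);
        }
    }
}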

Adjustments to the camera were made to ensure real-life movement was replicated in the extended lab: the camera was moved from behind the head of the blockman container to in front of it, as seen in Figure 6. Here the camera has a rooted z coordinate, which means it is only free to move in the xy plane that represents where the big screen exists in the lab, shown by the black line in Figures 5-8. The camera is then targeted to always face the far back wall of the extended lab. A line is drawn from the blockman's head through the xy plane that is the big screen, all the way to the back wall of the extended lab. The camera position is determined by where that line intersects the xy plane, which is shown by the yellow ring in Figures 5 and 7. The camera rotation is determined by following the line to the center of the back wall, at which the camera is always updated to point.
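A rough per-frame sketch of this camera update in a Unity script is given below. It intersects the line from the blockman's head to the centre of the back wall with the screen's xy plane and then aims the camera at the back wall; the field names (headJoint, backWallCenter, screenZ) are placeholders rather than the names used in the actual Zigfu-based scripts.

using UnityEngine;

// Sketch: each frame, place the camera where the head-to-back-wall line crosses
// the big-screen plane (rooted z), then rotate it to face the back wall centre.
public class ExtendedRoomCamera : MonoBehaviour
{
    public Transform headJoint;       // blockman head joint driven by the Kinect
    public Transform backWallCenter;  // middle of the back wall of the extended lab
    public float screenZ = 0f;        // z coordinate of the xy plane representing the big screen

    void LateUpdate()
    {
        // Line from the avatar's head towards the centre of the back wall.
        Vector3 direction = backWallCenter.position - headJoint.position;
        if (Mathf.Approximately(direction.z, 0f))
            return; // head lies on the screen plane; keep the previous camera pose

        // Intersection of that line with the plane z = screenZ (the big screen).
        float t = (screenZ - headJoint.position.z) / direction.z;
        Vector3 onScreen = headJoint.position + t * direction;

        // The camera sits on the screen plane and always points at the back wall.
        transform.position = new Vector3(onScreen.x, onScreen.y, screenZ);
        transform.LookAt(backWallCenter.position);
    }
}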

Figure 5. The yellow circle is where the camera is situated on the xy plane. The red line then follows the camera direction into the back wall.

Figure 6. View of the camera positioned where the yellow circle was in Figure 5, pointed at the back wall.


Figure 7. Camera position, indicated by the yellow circle, has changed due to the blockman's orientation inside the real lab.

Figure 8. View of the camera positioned on the yellow circle from Figure 7.

Citations

1. Avishek Chatterjee, Suraj Jain, and Venu Madhav Govindu. 2012. A pipeline for building 3D models using depth cameras. In Proceedings of the Eighth Indian Conference on Computer Vision, Graphics and Image Processing (ICVGIP '12). ACM, New York, NY, USA, Article 38, 8 pages. DOI=10.1145/2425333.2425371

2. Kibum Kim, John Bolton, Audrey Girouard, Jeremy Cooperstock, and Roel Vertegaal. 2012. TeleHuman: effects of 3D perspective on gaze and pose estimation with a life-size cylindrical telepresence pod. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 2531-2540. DOI=10.1145/2207676.2208640

3. Dmitry Batenkov. 2010. Real-time detection with webcam. XRDS 16, 4 (June 2010), 50-51. DOI=10.1145/1764848.1764861

4. Jiří F. Urbánek, Theodor Baláž, Jiří Barta, and Jaroslav Průcha. 2011. Technology of computer-aided adaptive camouflage. In Proceedings of the 2011 international conference on Computers and computing (ICCC'11), Vladimir Vasek, Yuriy Shmaliy, Denis Trcek, Nobuhiko P. Kobayashi, and Ryszard S. Choras (Eds.). World Scientific and Engineering Academy and Society (WSEAS), Stevens Point, Wisconsin, USA, 81-87.

5. Hao Du, Peter Henry, Xiaofeng Ren, Marvin Cheng, Dan B. Goldman, Steven M. Seitz, and Dieter Fox. 2011. Interactive 3D modeling of indoor environments with a consumer depth camera. In Proceedings of the 13th international conference on Ubiquitous computing (UbiComp '11). ACM, New York, NY, USA, 75-84. DOI=10.1145/2030112.2030123

6. David F. Huynh, Yan Xu, and Shuo Wang. 2006. Exploring user experience in "blended reality": moving interactions out of the screen. In CHI '06 Extended Abstracts on Human Factors in Computing Systems (CHI EA '06). ACM, New York, NY, USA, 893-898. DOI=10.1145/1125451.1125625

7. Franco Zambonelli and Marco Mamei. 2002. The Cloak of Invisibility: Challenges and Applications. IEEE Pervasive Computing 1, 4 (October 2002), 62-70. DOI=10.1109/MPRV.2002.1158280