Multi Touch Table


By the name of Allah

THE ISLAMIC UNIVERSITY – FACULTY OF ENGINEERING

COMPUTER ENGINEERING DEPARTMENT

Final Work Summarization

Graduation Project-Part 1

Multi Touch Table

BY

Wafaa' Audah Haneen El-masry

Nisma Hamdouna Maysaa El-Safadi

SUPERVISOR

Eng. Wazen Shbair

Gaza, Palestine

June 4th, 2012


Dedication

We dedicate this work
To our supervisor Eng. Wazen Shbair…
To our university …


Acknowledgment

We wish to express our sincere appreciation and thanks to Eng. Wazen Shbair, who gave us the chance to research and develop our knowledge of the project topic, and who supervised us wisely throughout the semester.


TABLE OF CONTENTS

Dedication
Acknowledgment
Table of Contents

Chapter 1: Introduction to Multi-Touch Surfaces
1-1 Introduction
1-2 Touch Screen Techniques
1-3 Infrared Multi-Touch Table
1-4 Technical Aspects/Features

Chapter 2: Project Components
2-1 Hardware
2-2 Software

Chapter 3: Software & Hardware Requirements Specification
3-1 About Project
3-2 Software Requirements
3-3 Hardware Requirements
3-4 Detailed Description of Components

Chapter 4: CCV Details
4-1 About CCV
4-2 Community Core Vision (CCV) – Calibration

Chapter 5: Multi-Touch Hello World
5-1 Application using Visual Studio with C#
5-2 The changes in Window1.xaml

Chapter 6: Creatives

Chapter 7: Conclusion


CHAPTER 1

INTRODUCTION TO MULTI TOUCH SURFACES

1.1 Introduction

Multi-touch denotes a set of interaction techniques that allow computer users to control

graphical applications with several fingers. Multi-touch devices consist of a touch screen

(e.g., computer display, table, wall) or touchpad, as well as software that recognizes multiple

simultaneous touch points, as opposed to the standard touch screen (e.g. computer touchpad,

ATM), which recognizes only one touch point.

Multi-touch surfaces allow for a device to recognize two or more simultaneous touches by

more than one user. Some have the ability to recognize objects by distinguishing between the

differences in pressure and temperature of what is placed on the surface. Depending on the

size and applications installed in the surface, two or more people can be doing different or

independent applications on the device. Multi-touch computing is the direct manipulation of

virtual objects, pages, and images, allowing you to swipe, pinch, grab, rotate, type, and

command them, eliminating the need for a keyboard and a mouse. Everything can be done

with our finger tips.

Multi-Touch Tables are currently being used in Real Estate, Corporate, Industrial, Trade

Show, Retail and Home Sales Centers. Screen Solutions offers a number of standard and

customized touch table programs and options for these and other markets.

1.2 Touch Screen Techniques

Touch screens rely on different phenomena to perform their functions, ranging from electrical

current to infrared light to sound waves.

Resistive vs. Capacitive

A resistive touch screen sandwiches several thin, transparent layers of material over an LCD

or CRT. The bottom layer transmits a small electrical current along an X and Y path. Sensors

monitor these voltage streams, waiting for an interruption. When the flexible top layer is

pressed, the two layers connect to form a new circuit. Sensors measure the change in voltage,

triangulating the position to X and Y coordinates. Resistive touch screens work with any kind

of input, including a stylus or finger.

Capacitive screens move the electrical layer to the top of

the display. A minimal current is broadcast and measured from the corners of the monitor.

When a person touches the screen, a small amount of the current is drawn away by the body’s

natural capacitance. The sensors measure the relative loss of the current and a microcontroller

triangulates the point where the finger made contact.

Surface Acoustic Wave

Surface acoustic wave (SAW) screens use beams of ultrasonic waves to form a grid over the

surface of a display. Sensors along the X and Y axes monitor the waves; when one is broken,

the X and Y points are combined to identify the touch coordinate. SAW screens, like their

capacitive counterparts, are durable and provide a clear line of sight to the display image, but

the former work with any kind of input, be it a fingertip, a fingernail, or a stylus. On the other


hand, they’re more susceptible to interference from dirt and other foreign objects that

accumulate on the screen, registering surface contaminants as points of contact.

Infrared and Infrared Imaging

Infrared touch screens are similar to SAW screens in that they use a ring of sensors and

receivers to form an X/Y grid over a display. But instead of sending electrical current or

sound waves across this grid, infrared LEDs shoot invisible beams over the surface of the

display. The microcontroller simply calculates which X and Y lines were broken to determine

the point of input. These screens work with a stylus, finger, or other pointer and give an

unobstructed view of the display. They’re also durable because the point of input is registered

just above the glass screen; only incidental contact is needed. Military applications often use

infrared screens because of the product’s longevity. Infrared imaging touch screens are vastly

different from touch screens that use traditional infrared input. IR imaging screens use two or

more embedded cameras to visually monitor the screen’s surface. IR beams are transmitted

away from the cameras, illuminating the outside layer of the display. When the beams are

disrupted by a fingertip or a stylus, the cameras measure the angle of the object’s shadow and

its distance from the camera to triangulate the disruption.

While the technologies may differ, we look forward to touch screens filling up walls and

tables in our homes and offices. At that point, simple, direct interaction will beat traditional

input methods. Who wants to carry a mouse around the house when a personal touch will do?

1.3 Infrared Multi-Touch Table

There are many techniques to make touch table with infrared light.

Frustrated Total Internal Reflection (FTIR)

Infrared light is shined into the side of an acrylic panel (most often by shining IR LEDs on

the sides of the acrylic). The light is trapped inside the acrylic by internal reflection. When a

finger touches the acrylic surface this light is “frustrated” causing the light to scatter

downwards where it is picked up by an infrared camera. A silicone rubber layer is often used

as a “compliant surface” to help improve dragging and sensitivity of the device. When

touching bare acrylic, one must press hard or have oily fingers in order to set off the FTIR

effect. With a compliant surface (like silicone rubber) the sensitivity is greatly improved.


Diffused Illumination (DI)

Diffused Illumination comes in two main forms: Front Diffused Illumination and Rear

Diffused Illumination. Both techniques use the same basic principles.

1- Front DI

Infrared light (often from the ambient surroundings) is shined at the screen from above the

touch surface. A diffuser is placed on top or on bottom of the touch surface. When an object

touches the surface, a shadow is created in the position of the object. The camera senses this

shadow.

2- Rear DI

Infrared light is shined at the screen from below the touch surface. A diffuser is placed on top

or on bottom of the touch surface. When an object touches the surface it reflects more light

than the diffuser or objects in the background; the extra light is sensed by a camera.

Depending on the diffuser, this method can also detect hover and objects placed on the

surface.

Comparisons between previous technologies


1.4 Technical Aspects/Features

These all have the same basic framework using cameras to sense objects, hand gestures, and

touch. The user input is then processed and displayed on the surface using rear projection. The

following is a diagram of the Microsoft Surface (Figure B) and an explanation of the parts.

1- Screen: The Surface has an acrylic tabletop which a diffuser makes capable of

processing multiple inputs from multiple users. Objects can also be recognized by

their shapes or reading coded tags.

2- Infrared: Infrared light is projected onto the underside of the diffuser. Objects or

fingers are visible through the diffuser to a series of infrared-sensitive cameras, which

are positioned underneath the surface of the tabletop.

3- CPU – This is similar to a regular desktop. The underlying operating system is a

modified version of Microsoft Windows Vista.

4- Projector – The Surface uses the same DLP light engine found in many rear-projection TVs.


CHAPTER 2

PROJECT COMPONENTS

2.1 Hardware

Making the Table

Begin with an engineering drawing of the table, deciding the location of each element in it. Also decide the angles of inclination for the mirrors, cameras, etc. beforehand. Try to make the mirror and the projection angle adjustable.

Construct the table according to the specified dimensions and angles, and test it by keeping your equipment in place. Make sure that in your design the table does not wobble much and the touch surface is at a sufficient height.


Modifying the camera

Open the camera and remove the infrared filter inside it. Replace it with a piece of darkened photographic film. This film works as a filter that blocks all the visible light and only allows infrared light to pass through it. The image should look gray and white when seen through software that reads the camera. You will see that every object gets illuminated and the picture is clearly visible, because infrared from the sun falls on every object on which visible light falls; use this to check that the filter is working.

The Infrared Plane

Set up the lasers at the four corners of the table. Add the 89° line generators to them and test whether the blob of a hand touching the surface is detected by the webcam.

2.2 Software

For this we advise you to run the installation of all the software in "run as administrator" mode. Install the Windows SDK and QuickTime Player on the computer. You should have the Visual C++ 2005 redistributable installed on the PC. Extract the zip files of CCV 1.4 and Multitouch Vista. In the folder into which you extracted the Multitouch Vista zip file, you will find a folder called Driver. Install the driver appropriate for your system by clicking on Install driver.cmd. Once the driver is installed, go to Device Manager and check whether you have a device called Universal Software HID Device under the Human Interface Devices option. Disable and re-enable that device. Open CCV and configure it using the various functions. Save the config.xml file and relaunch CCV (tbeta) to see the new settings. After CCV has been configured, go to the Multi-Touch Vista folder. First run Multitouch.Service.Console.exe, then Multitouch.Configuration.WPF.exe. Select the TUIO tab and click the arrow button. Now open Multitouch.Driver.Console.exe. After this your multi-touch computer is ready for use. For preinstalled games and apps, download the Microsoft Touch Pack from Microsoft's download centre and enjoy your first multi-touch table.

CHAPTER 3

SOFTWARE & HARDWARE REQUIREMENTS SPECIFICATION

3.1 About Project

• Today’s computers allow you to have multiple applications in multiple windows but they

probably only have one keyboard and mouse which means only one person can operate at a

time. These Surfaces engage the senses, improve collaboration, and empower the students by

having everything available to them at their finger tips.

• Interactive Classrooms: The multi-touch surface computers will encourage the students to

interact with content and each other, promoting group work and team-building skills.

• Students would have custom built hardware where they can create their assignments and

teachers may be able to see it instantly and help the students.

• Students sitting around the table may open a file, push it across, drag it, modify it, let

another student add or delete information and then save the document.


• In a photography class, the students could share their images instantly.

• In an art class, one student could be painting with a paint brush while another is drawing

with her finger. Both the paint brush and the finger would be recognized.

• In a geography class each student could find a specific location and the maps could be

displayed instantly.

• Teachers would not have to worry about finding space in a computer lab in order for the

students to create projects or conduct research.

• Students could share podcasts or other information related to a certain project that they have

saved to their flash drive just by laying the device on the surface.

A character recognition application will be made in order to give children the ability to write letters in Arabic and English and have them checked for correctness, besides the ability to view pictures that show the letter at different positions in a word, with related photos, and the ability to read the character aloud to the student.

System Features

• The administration of a classroom can be improved by reducing the amount of time a teacher spends fulfilling paperwork requirements alone, such as test taking and scoring.

• The tests could be included in each student's desktop and automatically recorded and scored.

• The teacher's desktop could have the ability to look at each student's desktop from their desk and take control if necessary. This can be used to help a student having trouble or to verify that the student is staying on task.

• Also, teachers would have the ability to send presentations to any or all desktops, eliminating the need for printouts and copies.

• If a problem occurred on one Surface, that student could move to another student's desk and work along with them until theirs was fixed.

• By engaging the students and combining both the audio and visual aspects in every lesson plan, we have a better chance of reaching every student and increasing the percentage of information retained.

• Students will be able to work in groups at one desktop Surface. This would make the construction of projects easier. Also, students will be able to work on class assignments together or help each other, and sometimes students are able to learn and understand better when the information is delivered or reiterated by their peers in a more creative fashion.


3.2 Software Requirements

- CCV Software

For the full software implementation of the multi-touch table, we need to install the Community Core Vision (CCV 1.4) program. CCV is the main program: it links the computer to the table (through the camera), receives information from the camera and, through special filters, analyzes the images and then locates the touch points. This program requires strong hardware, because it is a real-time program.

Figure 1 shows the main window of CCV; the following is a description of each part in

the figure:

1- This window shows what the camera sees (without any filtration or change).

2- The same image from the first window, but after several filters. You must ensure that each finger touch is represented by a white point called a blob, without impurities.

3- Directly after running the program, press this button without touching the screen. This resets the background before filtration starts.

4- Max Blob Size and Min Blob Size determine the size of the white point (the place

that you touch). It is very important that the numbers are reasonable so that the

program can recognize the presence of touch points.

5- Point calibration, a process that maps the X, Y coordinates onto the screen; the settings are stored in a special file.

6- Connection technique with the applications:

TUIO TCP: dedicated to programs that have been written in Flash with touch support.

TUIO OSC: dedicated to running any program that has not been written in Flash (when you want to transfer Windows control to touch, choose this type).

7- The final stage: save the settings to a file; the next time you run the program, it loads the settings from that file.


Figure 1: CCV program

- Multi-Touch Vista

If you have Windows Vista or Windows 7 and you need to convert the control of your windows to touch instead of the mouse in order to test multi-touch applications, you need to install the Multi-Touch Vista second release.

Multi-Touch Vista is a user input management layer that handles input from various devices

(touchlib, multiple mice, TUIO etc.) and normalizes it against the scale and rotation of the

target window.

To install Multi-Touch Vista on Windows 7, follow these steps:

1- First go to Multi-Touch Vista Download Section and download the Recommended

Download. The current file as of 15/04/2009 is "MultiTouchVista - second release -

refresh 2.zip".

2- Extract the zip file to a folder easily accessible.

3- Go into the "Driver" folder and then select "x32" if you are using 32 bit Windows

(don't go into it, just select it).

4- Make sure "x32" is already selected, then press "Shift" while right clicking on the

"x32" folder. Select the "Open command window here".

5- In the command prompt, press "tab" a few times until you see "Install driver.cmd" and

press "Enter". Answer "Yes" for User Account Control.


6- Now you can close the command window and go to "Control Panel" and then "Device

Manager".

7- Expand the Human Interface Devices. Right click on "Universal Software HID

device" and select "Disable", answer "Yes" for the prompt. Then "Enable" it again.

This reloads the device, and the driver should start working. To confirm that,

go to "Control Panel" and then "System" to check that "Pen and Touch:" is "Touch

Input Available with 255 Touch Points".

8- Then proceed with either one of the following:

a- To run Multi-touch application created with Multi-Touch Vista Framework, go to

the Multi-Touch Vista folder extracted earlier, find

"Multitouch.Service.Console.exe" and double click to run it. The default input is

already set to MultipleMice, so you will see a red dot moving together with the

mouse, but it will not be at the same location as the mouse cursor. You still have to

use the regular mouse cursor to interact with the windows, as the red dot will only

interact with applications created using the Multi-Touch Vista Framework. Whenever

you add or remove a mouse, you have to restart

"Multitouch.Service.Console.exe".

b- To test Multi-touch features in Windows 7, first go to the Multi-Touch Vista folder

extracted earlier, find "Multitouch.Service.Console.exe" and double click to run

it. You should see a red dot corresponding to the mouse cursor (probably not at the

same location). Now go to the same folder (use the regular mouse cursor, the red

dot doesn't interact with Windows yet) and find

"Multitouch.Driver.Console.exe", double click and run it. Now Multi-touch

driver should be running, but the original mouse cursor is still dominant. Now go

to the same folder and find "Multitouch.Configuration.WPF.exe", double click

and run it. Click on "Configure device", tick on the empty box for "Block native

windows mouse input....". and press "Ok". Now the red dot can finally interact

with Windows. To stop it (sometimes mouse interaction is totally gone after

testing for a long time), use "alt-tab" to reach the two command windows and

press "Enter" to end them.

Some of the Windows 7 multi-touch features to test are:

1- Paint

2- Internet Explorer or Firefox Browser (Zoom in and zoom out).

3- Activate the software keyboard on the left edge of the screen and type using it.

- CL Eye Platform Driver

CL Eye Platform Driver provides users a signed hardware driver which exposes Sony

Playstation™ 3 Eye camera to third party applications such as Adobe Flash, Skype, MSN or

Yahoo for video chat or conferencing. It provides full control of the camera, such as resolution, exposure and gain configuration. The CL Eye Platform Driver is also needed to make the Sony PlayStation™ 3 Eye camera work with Windows.


3.3 Hardware Requirements

- Introduction to Hardware

Multi-touch denotes a set of interaction techniques that allow computer users to control

graphical applications with several fingers. Multi-touch devices consist of a touch screen

(table) and other components as well as software that recognizes multiple simultaneous touch

points. At the moment there are five major techniques that allow for the creation of

stable multi-touch hardware systems; these include: Frustrated Total Internal Reflection

(FTIR), Rear Diffused Illumination (Rear DI) such as Microsoft's Surface Table, Laser Light

Plane (LLP), LED-Light Plane (LED-LP), and finally Diffused Surface Illumination (DSI).

Optical or light sensing (camera) based solutions make up a large percentage of multi-touch

devices. Their scalability, low cost and ease of setup are the main reasons for the popularity

of optical solutions. Each of the previous techniques consists of an optical sensor (typically a

camera), an infrared light source, and visual feedback in the form of a projection or LCD. Prior to

learning about a technique, it is important to understand these parts, which all optical

techniques roughly share.

1. Wooden or metal table or coffer.

2. Piece of glass.

3. Diffuser.

4. Projector.

5. Web camera.

6. Piece of mirror.

7. IR LEDs.

- The details of these components will be shown later.

3.4 Detailed Description of Components

- Wooden or metal table or coffer: used to contain all the components inside it; the upper surface is for touch. The size of the table is bounded by the size of the wanted screen.

- The table that we need will have these parameters: height 80 cm, width 60 cm, length 80 cm, and a screen size of 30 inches. Several holes must be opened to enable access to the internal components and to vent the heat of the projector.

- Piece of glass, which represents the surface (screen); this must be made of glass or of Plexiglas material. The thickness of the surface is 3-5 cm.

- Diffuser, which is the upper layer of the table; this layer catches the picture from the projector and also shields the camera from the effects of outside light. This layer can be made from cheap white nylon.

- Projector, used to transfer the picture and show it on the upper surface of the table; the quality of the projector affects the quality of the picture's appearance.

- Mirror, used to increase the distance between the surface and the projector in order to obtain a larger screen size.

- Infrared LEDs, used to send infrared rays towards the surface; every touching fingertip reflects the rays towards the bottom (at the exact touched point). The reflected rays are captured by the camera and sent to the CPU. Four IR LED strips with 48 LEDs are needed.

- Camera, used to capture the infrared rays that are reflected when touching the surface; it then sends the picture to the CPU. The camera must have a high frame rate and high resolution, so that many pictures can be taken in one second. A Sony camera named PS3 Eye will be used, which gives 60 pictures per second at a resolution of 640x480. This camera has to be attached to the computer with a special driver.

- Every camera has a filter that prevents infrared rays from reaching the sensor. In this project we need only the infrared rays to be detected, so we must remove that filter from the camera and add a piece of photographic negative that blocks the natural brightness from reaching the camera.

- CPU, for the connection with the table and for running the applications.

CHAPTER 4

CCV DETAILS

4.1 About CCV

- Community Core Vision, CCV for short, is an open source/cross-platform solution

for computer vision and machine sensing. It takes a video input stream and outputs

tracking data (e.g. coordinates and blob size) and events that are used in building

multi-touch applications. The coordinate positions are sent over port 3333 of the

computer; we know the coordinate positions can be read into Java.

with various web cameras and video devices as well as connect to various

TUIO/OSC/XML enabled applications and supports many multi-touch lighting

techniques including: FTIR, DI, DSI, and LLP with expansion planned for the future

vision applications (custom modules/filters).

- CCV outputs in three formats (XML, TUIO and Binary) over network sockets and

has an internal C++ event system.
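As a rough illustration (not part of the original project), the following C# sketch uses .NET's UdpClient to listen on UDP port 3333, the default TUIO port mentioned above, and simply reports that packets arrive; actually decoding the OSC/TUIO payload would normally be left to a TUIO client library.

// Minimal sketch: confirm that CCV's TUIO/OSC packets reach port 3333.
// The port number comes from the text above; everything else is illustrative.
using System;
using System.Net;
using System.Net.Sockets;

class TuioPacketSniffer
{
    static void Main()
    {
        using (var client = new UdpClient(3333))            // CCV's default TUIO port
        {
            var remote = new IPEndPoint(IPAddress.Any, 0);
            Console.WriteLine("Waiting for CCV packets on UDP port 3333...");
            while (true)
            {
                // Blocks until a packet arrives; the bytes are an OSC bundle
                // that a TUIO client library would parse into blob coordinates.
                byte[] packet = client.Receive(ref remote);
                Console.WriteLine("Received " + packet.Length + " bytes from " + remote);
            }
        }
    }
}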

- To get it working with Surface, your best bet is the MT Vista project, as it will take

TUIO input and dispatch WM_Touch events.


1. Source image - Displays the raw video image from either camera or video file.

2. Use Camera Toggle - Sets the input source to camera and grabs frames from

selected camera.

3. Use Video Toggle - Sets the input source to video and grabs frames from video file.

4. Previous Camera Button - Gets the previous camera device attached to computer

if more than one is attached.

5. Next Camera Button - Gets the next camera device attached to computer if more

than one is attached.

6. Tracked Image - Displays the final image after image filtering that is used for blob

detection and tracking.

7. Inverse - Track black blobs instead of white blobs.

8. Threshold Slider - Adjusts the level of acceptable tracked pixels. The higher the

option is, the bigger the blobs have to be before they are converted into tracked blobs.

9. Movement filtering - Adjust the level of acceptable distance (in pixels) before a

movement of a blob is detected. The higher the option is, the more you have to

actually move your finger for CCV to register a blob movement.

10. Min Blob Size - Adjust the level of acceptable minimum blob size. The higher the

option is, the bigger a blob has to be to be assigned an ID.

11. Max Blob Size - Adjust the level of acceptable maximum blob size. The higher

the option is, the bigger a blob can be before losing its ID.

12. Remove Background Button - Captures the current source image frame and uses

it as the static background image to be subtracted from the current active frame. Press

this button to recapture a static background image

13. Dynamic Subtract Toggle - Dynamically adjusts the background image. Turn this

on if the environmental lighting changes often or false blobs keep appearing due to

environmental changes. The slider will determine how fast the background will be

learned.

14. Smooth Slider - Smoothes the image and filters out noise (random specs) from the


image.

15. Highpass Blur Slider - Removes the blurry parts of the image and leaves the

sharper brighter parts.

16. Highpass Noise - Filters out the noise (random specs) from the image after

applying Highpass Blur.

17. Amplify Slider - Brightens weak pixels. If blobs are weak, this can be used to

make them stronger.

18. On/Off Toggle - Found on each filter; used to turn the filter on or off.

19. Camera Settings Button - Opens a camera settings box. This will open more

specific controls of the camera, especially when using a PS3 Eye camera.

20. Flip Vertical Toggle - Flips the source image vertically.

21. Flip Horizontal Toggle - Flips the source image horizontally.

22. GPU Mode Toggle - Turns on hardware acceleration and uses the GPU. This is

best used on newer graphics cards only. Note: GPU mode is still in early development

and may not work on all machines.

23. Send UDP Toggle - Turns on the sending of TUIO messages.

24. Flash XML - Turns on the sending of Flash XML messages (no need for flosc

anymore).

25. Binary TCP - Turns on the sending of RAW messages (x,y coordinates).

26. Enter Calibration - Loads the calibration screen.

27. Save Settings - Saves all the current settings into the XML settings file.

4.2 Community Core Vision (CCV) – Calibration

In order to calibrate CCV for your camera and projector/LCD, you’ll need to run the

calibration process. Calibrating allows touch points from the camera to line up with

elements on screen. This way, when touching something displayed on screen, the

touch is registered in the correct place. In order to do this, CCV has to translate

camera space into screen space; this is done by touching individual calibration points.

Follow the directions below to setup and perform calibration.

note: For those displaying an image on the touch surface (projector or LCD), you'll

need to set up your computer so that the main monitor is the video projector so that

CCV is displayed on the touch surface.

Calibration Instructions

1. Press the enter calibration button or “c” to enter the calibration screen.

A grid of green crosses will be displayed. These crosses are the calibration

points you will touch once calibration begins (step 4).

There is a white bounding box that surrounds the calibration points. If a visual

image is not being displayed on the touch surface (MTmini users), skip to step

3; otherwise, continue.


2. (MTmini users skip this step) If the white bounding box is not fully visible or

aligned with the touch surface, follow the directions under Aligning Bounding Box to

Projection Screen displayed on the CCV screen to align the bounding box and

calibration points so they fit the touch surface. The goal is to match the white

bounding box to the left, right, top, and bottom of your screen.

Aligning Bounding Box to Projection Screen:

o Press and hold “w” to move the top side, “a” to move left side, “s” to

move bottom side, and “d” to move right side.

o While holding the above key, use the arrow keys to move the side in

the arrowed direction.

In other words, Hold “up arrow”, then “left arrow” on your keyboard to get the

upper corner at the top left corner of your screen, then hold “s + down arrow”,

then “d + right arrow” to get the bottom right corner in position. Up, down,

right, left arrows will make the box move, and a combination of up, down,

right, left and z, q, s, d or equivalent on qwerty keyboard will make the edges

move.


3. If using a wide angle camera lens or need higher touch accuracy, more calibration

points can be added by following the Changing Grid Size directions on screen. note:

adding additional calibration points will not affect performance.

To Change Grid Size:

o Press “+” to add points or “-” to remove points along the x-axis.

o Hold “shift” with the above to add or remove points along the y-axis.

If this does not work, you may want to try “_”, and “+/-” from

the numerical pad.

4. Begin calibration by pressing “c.”

5. A red circle will highlight over the current calibration touch point. Follow the

directions on screen and press each point until all targets are pressed.

If not projecting an image on the touch surface (MTmini users), you may guess

or draw the touch points directly on the touch surface so you know where to

press.

If a mistake is made, press “r” to return to the previous touch point. If there

are false blobs and the circle skips without you touching, press “b” to

recapture the background and press "r" to return to the previous point.


6. After all circles have been touched, the calibration screen will return and accuracy

may be tested by pressing on the touch area. If calibration is inaccurate, calibrate again

(Step 4) or press "x" to return to the main configuration window and adjust the filters for better

blob detection.


CHAPTER 5

MULTI-TOUCH HELLO WORLD

5.1 Application using Visual Studio with C#

Multitouch "Hello World" program:

Start Visual Studio and create a new WPF project and name it

"MultitouchHelloWorld".

Add a reference to the Multitouch.Framework.WPF.dll assembly.


5.2 The changes in Window1.xaml

In Window1.xaml add a new namespace:

xmlns:mt="http://schemas.multitouch.com/Multitouch/2008/04".

and change root object <Window> to <mt:Window>. Don't forget to close it with

</mt:Window>.

Now add a TouchablePanel as a child of Grid and TextBlock as a child of

TouchablePanel.

Set the Text property of TextBlock to "Hello World!!!".

Set FontSize property to 72, FontWeight to Bold and make Foreground White and

Background LightBlue.
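Putting those steps together, Window1.xaml might look roughly like the sketch below. Only the mt namespace, the <mt:Window> root, the TouchablePanel/TextBlock structure and the text properties come from the description above; the x:Class name, the mt: prefix on TouchablePanel and the window size are assumptions.

<mt:Window x:Class="MultitouchHelloWorld.Window1"
           xmlns="http://schemas.microsoft.com/winfx/2006/xaml/presentation"
           xmlns:x="http://schemas.microsoft.com/winfx/2006/xaml"
           xmlns:mt="http://schemas.multitouch.com/Multitouch/2008/04"
           Title="MultitouchHelloWorld" Height="600" Width="800">
    <Grid>
        <!-- TouchablePanel lets its children be moved and rotated by touch -->
        <mt:TouchablePanel>
            <TextBlock Text="Hello World!!!"
                       FontSize="72" FontWeight="Bold"
                       Foreground="White" Background="LightBlue" />
        </mt:TouchablePanel>
    </Grid>
</mt:Window>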

The changes in the Window1.cs.

Open Window1.cs and change the Window1 base class from Window to Multitouch.Framework.WPF.Controls.Window.
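A minimal sketch of the resulting code-behind is shown below; only the base-class change comes from the text, while the namespace and constructor are standard WPF boilerplate assumed here.

// Window1.cs - sketch of the code-behind after the base-class change.
namespace MultitouchHelloWorld
{
    public partial class Window1 : Multitouch.Framework.WPF.Controls.Window
    {
        public Window1()
        {
            InitializeComponent();
        }
    }
}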


Before you start it, start the multitouch service, for example by executing Multitouch.Service.Console.exe.

Now hit F5 to start the program. Your program will look like this and you can touch

the text and move and rotate it around.

Multitouch "Photo Album " program

Replace TouchablePanel with an ItemsControl and

set its ItemsPanel to TouchablePanel.

Now bind ItemsSource to Photos property with {Binding Photos}.

Open Window1.cs file.

Add a new DependencyProperty Photos of type ObservableCollection<string>.

In the constructor, before InitializeComponent is executed, set DataContext to this. Now

override the OnInitialized method and add this code:

foreach (string file in Directory.GetFiles(Environment.GetFolderPath(Environment.SpecialFolder.MyPictures), "*.jpg").Take(5)) { Photos.Add(file); }
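Assembled from the steps above, the Photo Album code-behind might look roughly like this sketch. The property name, its type and the OnInitialized body come from the text; the DependencyProperty registration boilerplate, the namespace and the class layout are assumptions.

// Window1.cs - sketch of the Photo Album code-behind.
using System;
using System.Collections.ObjectModel;
using System.IO;
using System.Linq;
using System.Windows;

namespace MultitouchHelloWorld
{
    public partial class Window1 : Multitouch.Framework.WPF.Controls.Window
    {
        // DependencyProperty so that {Binding Photos} in the XAML can find the collection.
        public static readonly DependencyProperty PhotosProperty =
            DependencyProperty.Register("Photos", typeof(ObservableCollection<string>), typeof(Window1));

        public ObservableCollection<string> Photos
        {
            get { return (ObservableCollection<string>)GetValue(PhotosProperty); }
            set { SetValue(PhotosProperty, value); }
        }

        public Window1()
        {
            Photos = new ObservableCollection<string>();
            DataContext = this;          // set before InitializeComponent, as described above
            InitializeComponent();
        }

        protected override void OnInitialized(EventArgs e)
        {
            base.OnInitialized(e);
            // Load the first five .jpg files from the user's Pictures folder.
            string pictures = Environment.GetFolderPath(Environment.SpecialFolder.MyPictures);
            foreach (string file in Directory.GetFiles(pictures, "*.jpg").Take(5))
                Photos.Add(file);
        }
    }
}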


It will take 5 pictures from your Pictures folder, so make sure you have them.

Finally, let's get back to Window1.xaml and add a DataTemplate to display the strings as Images, as sketched below.
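A possible sketch of that markup follows; the exact attribute values (such as the image width) are assumptions, while the ItemsControl, the TouchablePanel items panel, the {Binding Photos} source and the image DataTemplate come from the steps above.

<Grid>
    <ItemsControl ItemsSource="{Binding Photos}">
        <ItemsControl.ItemsPanel>
            <ItemsPanelTemplate>
                <!-- TouchablePanel hosts the items so each photo can be moved and rotated -->
                <mt:TouchablePanel />
            </ItemsPanelTemplate>
        </ItemsControl.ItemsPanel>
        <ItemsControl.ItemTemplate>
            <DataTemplate>
                <!-- each item is a file path string, bound directly to Image.Source -->
                <Image Source="{Binding}" Width="300" />
            </DataTemplate>
        </ItemsControl.ItemTemplate>
    </ItemsControl>
</Grid>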

Now start the program and enjoy your Multitouch Photo Album.


CHAPTER 6

CREATIVES

The following figure shows the Multi Touch Table project timetable.


The following figure shows the Multi Touch Table detailed budget.


CHAPTER 7

CONCLUSION

Work will continue over the summer holiday with a great deal of effort and assiduity.

We are sure that…………………………

Everything can be done with our finger tips.