
Transcript of 20141127

SurfaceLink: Using Inertial and Acoustic Sensing to Enable Multi-Device Interaction on a Surface ACM CHI 2014

Let’s Kick It: How to Stop Wasting the Bottom Third of Your Large Scale Display ACM CHI 2014

Prototype

(a) We propose three types of input, each ergonomically fitting one interactive region: hand interaction, foot taps, and foot gestures; (b) and (c) show an example technique: kicking an object in the lower portion of the display causes it to pop up to the finger position.
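The transcript gives no implementation details for this technique, so the following is only a rough sketch of how a kick-to-pop-up handler could be wired: a kick detected in the bottom third of the display moves the kicked object to the user's last finger position. The region split, class names, and coordinates are all assumptions, not from the paper.

```python
# Hypothetical sketch: route a kick detected in the lower display region to a
# "pop up to finger" action. Region boundary and names are illustrative only.
from dataclasses import dataclass

DISPLAY_HEIGHT_PX = 2160                              # assumed vertical resolution
LOWER_REGION_START = int(DISPLAY_HEIGHT_PX * 2 / 3)   # start of the "bottom third"

@dataclass
class DisplayObject:
    x: float
    y: float

def on_kick(obj: DisplayObject, kick_y: float, finger_pos: tuple[float, float]) -> None:
    """If the kick and the object are both in the bottom third, move the object to the finger."""
    if kick_y >= LOWER_REGION_START and obj.y >= LOWER_REGION_START:
        obj.x, obj.y = finger_pos   # object "pops up" to the hand-interaction region

if __name__ == "__main__":
    icon = DisplayObject(x=500.0, y=2000.0)            # object sitting in the kick zone
    on_kick(icon, kick_y=2050.0, finger_pos=(520.0, 700.0))
    print(icon)                                         # now near the finger position
```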

How it works

Toe lift gesture: the user slides their foot under the display (a) and lifts their toes (b), causing objects to rise (c).
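How the toe lift is recognized is not spelled out here; one plausible sketch, assuming the depth camera yields a per-frame height map of the foot region under the display, is to watch for a rise in the toe area between frames. The frame layout, column slice, and threshold below are all hypothetical.

```python
# Hypothetical sketch: detect a toe lift from successive depth-camera frames of
# the space under the display. Frames are assumed to be 2D arrays of heights in
# mm with the toes in known columns; all constants are illustrative.
import numpy as np

TOE_COLUMNS = slice(0, 20)     # assumed columns covering the toe region
LIFT_THRESHOLD_MM = 30.0       # assumed minimum rise that counts as a toe lift

def toe_lift_detected(prev_frame: np.ndarray, curr_frame: np.ndarray) -> bool:
    """Return True if the average toe height rose by more than the threshold."""
    prev_toe = prev_frame[:, TOE_COLUMNS].mean()
    curr_toe = curr_frame[:, TOE_COLUMNS].mean()
    return (curr_toe - prev_toe) > LIFT_THRESHOLD_MM

if __name__ == "__main__":
    flat = np.full((40, 60), 20.0)          # foot flat on the floor
    lifted = flat.copy()
    lifted[:, TOE_COLUMNS] += 50.0          # toes raised by ~50 mm
    print(toe_lift_detected(flat, lifted))  # True
```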

Foot slide: the user places their foot under the screen (a) and slides it horizontally; the object follows (b). Foot pull: the user removes objects by pulling them away from the screen (c).
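The mapping from foot motion to object motion is only described at this high level; a minimal sketch, assuming a tracker that reports the foot's horizontal position and its distance from the screen, might look like the following. The tracker interface, pull threshold, and names are assumptions.

```python
# Hypothetical sketch: foot slide moves the object horizontally with the foot;
# foot pull (moving the foot away from the screen) removes the object.
from dataclasses import dataclass

PULL_THRESHOLD_CM = 25.0   # assumed foot-to-screen distance that counts as a pull

@dataclass
class DisplayObject:
    x: float
    visible: bool = True

def update_object(obj: DisplayObject, foot_x: float, foot_dist_cm: float) -> None:
    """Follow the foot horizontally; hide the object if the foot is pulled away."""
    if foot_dist_cm > PULL_THRESHOLD_CM:
        obj.visible = False            # foot pull: object is removed
    elif obj.visible:
        obj.x = foot_x                 # foot slide: object follows the foot

if __name__ == "__main__":
    icon = DisplayObject(x=100.0)
    update_object(icon, foot_x=340.0, foot_dist_cm=5.0)    # slide: x follows foot
    update_object(icon, foot_x=340.0, foot_dist_cm=40.0)   # pull: object hidden
    print(icon)
```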

The prototype (d) comprises surface microphones (a) for classifying finger vs. foot input, IR laser light-plane generators (b) for optical multitouch, and a depth camera (c) looking under the surface to capture foot gestures.
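The transcript does not say how the microphones separate finger taps from foot taps. One common approach for this kind of acoustic classification is to compare low- vs. high-frequency energy in each tap, since foot impacts tend to carry more low-frequency energy. The sketch below only illustrates that idea; the sample rate, cutoff, and threshold are assumed, not taken from the paper.

```python
# Hypothetical sketch: classify a tap picked up by a surface microphone as
# "finger" or "foot" from the share of low-frequency energy in its spectrum.
# All constants (sample rate, cutoff, ratio threshold) are illustrative.
import numpy as np

SAMPLE_RATE_HZ = 44100
LOW_CUTOFF_HZ = 400.0        # assumed boundary between "low" and "high" bands
FOOT_ENERGY_RATIO = 0.7      # assumed: foot taps put most energy below the cutoff

def classify_tap(samples: np.ndarray) -> str:
    """Return 'foot' if most spectral energy lies below LOW_CUTOFF_HZ, else 'finger'."""
    spectrum = np.abs(np.fft.rfft(samples)) ** 2
    freqs = np.fft.rfftfreq(len(samples), d=1.0 / SAMPLE_RATE_HZ)
    low_energy = spectrum[freqs < LOW_CUTOFF_HZ].sum()
    return "foot" if low_energy / spectrum.sum() > FOOT_ENERGY_RATIO else "finger"

if __name__ == "__main__":
    t = np.arange(0, 0.05, 1.0 / SAMPLE_RATE_HZ)
    foot_tap = np.sin(2 * np.pi * 80 * t)       # low-frequency thud
    finger_tap = np.sin(2 * np.pi * 2000 * t)   # higher-frequency click
    print(classify_tap(foot_tap), classify_tap(finger_tap))   # foot finger
```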