
Extending drag-and-drop to new interactive environments: a multi-display, multi-instrument and multi-user approach

Research report RR-08012

Maxime Collomb, Mountaz Hascoët
Univ. Montpellier II – LIRMM – CNRS, 161 rue Ada, 34392 Montpellier Cedex 5, France
[email protected], [email protected]

Drag-and-drop is probably one of the most successful and generic representations of direct manipulation in today's WIMP interfaces. At the same time, emerging new interactive environments such as distributed display environments or large display surface environments have revealed the need for an evolution of drag-and-drop to address new challenges. In this context, several extensions of drag-and-drop have been proposed over the past several years. However, implementations for these extensions are difficult to reproduce, integrate and extend. This situation hampers the development or integration of advanced drag-and-drop techniques in applications.

The aim of this paper is to propose a unifying implementation model of drag-and-drop and of its extensions. This model, called M-CIU, aims at facilitating the implementation of advanced drag-and-drop support by offering solutions to problems typical of new emerging environments. The model builds upon a synthesis of drag-and-drop implementations, an analysis of requirements for meeting new challenges, and a dedicated interaction model based on instrumental interaction. By using this model, a programmer will be able to implement advanced drag-and-drop supporting (1) multi-display environments, (2) large display surfaces and (3) multi-user systems. Furthermore, by unifying the implementation of all existing drag-and-drop approaches, this model also provides flexibility by allowing users (or applications) to select the most appropriate drag-and-drop technique depending on the context of use. For example, a user might prefer to use pick-and-drop when interacting with multiple displays attached to multiple computers, push-and-throw or drag-and-throw when interacting with large displays, and possibly standard drag-and-drop in a more traditional context. Finally, in order to illustrate the various benefits of this model, we provide an API called PoIP, a Java-based implementation of the model that can be used with most Java-based applications. We also briefly describe Orchis, an interactive graphical application for sharing bookmarks that uses PoIP to implement distributed drag-and-drop-like interactions.

1. Introduction

Even though drag-and-drop has been integrated widely, implementations vary significantly from one windowing system to another or from one toolkit to another (Collomb and Hascoet, 2005). This situation is worse for drag-and-drop extensions such as pick-and-drop, drag-and-pop, push-and-throw, etc. As far as these extensions are concerned, very little support, if any, is usually provided, and implementations are hard to reuse, generalize or extend as new needs arise.

As the number of drag-and-drop extensions is increasing, it is important to propose a unified framework that clarifies the field and offers benefits from at least three perspectives: the user's perspective, the design perspective and the implementation perspective. From a user's perspective, such a unification will hopefully make it possible for users to choose the type of drag-and-drop they like best or that best suits some specific environment or task. From a designer's perspective, a unifying framework will help better understand the differences between the possible techniques and the design dimensions at stake. Lastly, from the programmer's perspective, a unifying implementation model should save a lot of time and effort in the development of different drag-and-drop extensions in new emerging and challenging environments such as large displays and distributed display environments. The aim of this paper is to propose the basis for building such a framework, including a unified and open implementation model.

In the following section, we discuss how emerging interactive environments bring new challenges for the drag-and-drop paradigm, review most extensions proposed so far, and propose a unified framework for comparing them. In the next section, we present a new implementation model that not only builds upon an analysis of requirements for adapting to new emerging interactive environments but also accounts for the implementation models of existing solutions in most widespread windowing systems or toolkits (Collomb and Hascoet, 2005). This model is based on the definition of instruments (Beaudouin-Lafon, 2000) that embody interaction techniques. The presentation of the implementation model at a generic level is based on a generic type of instrument that embodies basic drag-and-drop interaction styles. We further discuss how specific emerging drag-and-drop extensions can be implemented as specific instruments that are smoothly integrated into the model.

2. New challenges for the drag-and-drop paradigm

Most challenges that drag-and-drop has faced recently can be attributed to the emergence of new display environments such as wall-size displays or distributed display environments (DDE). DDE have been defined (Hutchings et al., 2005) as

“Computer systems that present output to more than one physical display.”

This general definition covers a broad range of systems. Consequently, systems that can be studied in this field can exhibit huge differences. An example of such a huge difference is the difference between two typical types of DDE: (1) multiple displays attached to the same machine and (2) multiple displays attached to different machines. While both configurations can be considered as DDE, the degree of integration of the distinct displays is very different. In the first case, the different displays are handled within the same windowing system, offering a single workspace or desktop with full communication capacities between windows of the different displays. In the second case, on the contrary, the two displays are handled by two distinct and potentially heterogeneous windowing systems, making it much more difficult to offer a similar single workspace spanning the different displays. These two different configurations lead to significantly different types of problems when implementing drag-and-drop in these environments.

2.1. Preliminary definitions

In order to better characterize the types of problems that are most challenging for drag-and-drop and its variants, preliminary definitions are useful. In this paper, we use a generally agreed definition of the term display: a physical device used to display information. The term window is used to refer to an area of a display devoted to handling input and output from various programs.

Based on these definitions, we define a surface as a set of windows. We further define the term distributed surface as a surface with two additional properties: (1) windows of the surface potentially appear on different displays attached to different machines handled through different windowing systems, and (2) interaction between windows handled by different windowing systems is transparent for the user, i.e. it is similar to interactions that happen in a single workspace.

For example, a set of windows spanning three different displays handled by three different machines, such as a laptop running MacOS, a PC running X-Windows and a tablet PC running Windows, can be considered as a distributed surface as soon as a system makes it possible to surpass the boundaries of each windowing system to support some interactions between different windows on different displays. It is important to stress that systems handling distributed surface environments are de facto potentially multi-user systems. Indeed, each machine involved in the surface can be controlled by a different user. To some extent they can be considered as a specific type of groupware environment.

Our aim is to propose solutions for typical problems arising when designing and implementing drag-and-drop-like interaction in a distributed surface environment. These problems can be structured in three categories: (1) usability and scalability problems, (2) multi-computer and interoperability problems and (3) multi-user and concurrency problems.

2.2. Usability and scalability issues

Usability problems derive from the complexity of new environments and the variety of usage: heterogeneity of displays in terms of size, number and nature, and heterogeneity of users in terms of abilities, experience and style. Over the years, the basic drag-and-drop paradigm has been extended with new interaction styles. Most of these extensions address specific needs for particular display environments. Consequently, we now have interesting alternatives to the original drag-and-drop style that best suit some contexts. These alternatives are discussed in section 3. One benefit of our approach is to support different styles of interaction in a unified implementation model, so that shifting from one interaction style to another is facilitated. This feature can be seen as a particular form of support for plasticity:

“the capacity of a user interface to withstand variations of both the system physical characteristics and the environment while preserving usability”

(Thevenin and Coutaz, 1999).

Other types of usability problems that arise with emerging environments can be considered scalability problems, due to the increasing space available on large wall-sized displays: to what extent will a technique originally designed to work on one single, relatively low-resolution display adapt to an increasing number, size and resolution of displays? When such displays are used with direct pointing devices, e.g. in the iRoom (Stanford University, 2008) or the DynaWall (Fraunhofer Institute, 2008), the original drag-and-drop paradigm reaches its limits: interactions that involve dragging objects tend to be particularly tedious and error-prone (Collomb and Hascoet, 2004; Collomb et al., 2005) and can be further complicated by the bezels separating screen units (Baudisch et al., 2003). Drag-and-drop might even fail when targets are out of reach, e.g. located too high or too low on a display. Further, user performance is known to degrade as the size of displays increases: larger displays induce greater distances between sources and targets, and target acquisition time is known to increase with distance (Fitts, 1992).
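As background (the formula below is standard and is not given in this report), Fitts' law in its common Shannon formulation relates the movement time MT needed to acquire a target of width W at distance D, with a and b empirically fitted, device-dependent constants:

MT = a + b \log_2\left(\frac{D}{W} + 1\right)

Enlarging a display typically increases D without changing W, which is why acquisition gets slower on larger surfaces.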

2.3. Distributed display surfaces: transparently integrating multiple computers and multiple windowing systems

Multi-computer and multi-windowing-system problems are probably the most challenging problems that have to be addressed to handle distributed surfaces. Because of the difficulty in surpassing the boundaries of the windowing systems associated with each display, there is still very little support for making the windows of a distributed surface behave as if they were part of the same workspace. Some systems (Shoeneman, 2008; Johanson et al., 2002; Lachenal, 2004) aim to support communication between displays based on the redirection of input/output mechanisms, but this support is still at an early stage.

Other approaches, like those found in distributed visualization environments, provide multi-head support for multiple displays attached to different machines (Xdmx project, 2008; Humphreys et al., 2002). These systems provide advanced support for distributed surfaces. However, they do not have the flexibility needed to handle heterogeneous or dynamic distributed surfaces. Indeed, they impose strict constraints on the architecture of the clusters of machines used and on the windowing systems or graphic toolkits that these machines run. Clearly, they are not aimed at handling evolving sets of machines running heterogeneous windowing systems and graphical toolkits. Our model, on the contrary, imposes no particular constraints on the machines involved in distributed surfaces, and it also supports evolving configurations, so that windows from new machines can be dynamically added to or removed from a distributed surface.

2.4. Multi-user issues

Amongst all the problems that arise with new display environments, some multi-user problems are typical of the field of computer-supported collaborative work (CSCW) and have been studied over the past decades (Greenberg and Marwood, 1994). As far as drag-and-drop is concerned, the important aspect of multi-user support is being able to distinguish event streams from different users. By fulfilling this requirement, our model can also be used by a programmer willing to integrate advanced drag-and-drop features in a CSCW context.

3. Drag-and-drop extensions

The problems listed in the previous section are partially addressed by a set of drag-and-drop extensions that have been proposed over the past 10 years. In this domain, it is possible to distinguish between different types of approaches depending on whether the underlying interaction model is target-oriented, source-oriented or undirected. This section rapidly reviews the various extensions proposed recently according to these three categories. The next section further compares these extensions according to more detailed dimensions.

3.1. Target-oriented interaction

Hereafter, target-oriented interaction refers to an interaction style in which the main focus and feedback are located around potential target locations. In most cases, with target-oriented interaction, users have to adjust their movement continuously around potential target locations to finally acquire the right target at its real location. These continuous adjustments may have a marked impact on the system since they imply high refresh rates: a little lag between the user's hand movement and the feedback would significantly decrease the usability of such an interaction style. Furthermore, to be effective, target-oriented instruments should be used with displays where targets are all roughly equally within sight. On very large wall-size displays it might be difficult to clearly distinguish targets very far away from the source location. In such environments, target-oriented instruments would fail and source-oriented instruments would be more suitable.

In this section, we quickly review the most recent extensions that belong to this category: throwing, drag-and-throw and push-and-throw.

Figure 1. (Left to right, top to bottom) Examples of (a) hyperdragging, (b) stitching, (c) drag-and-pop, (d) the vacuum (black arrows added), (e) push-and-throw and (f) push-and-pop. (Reproduced with the authors' permission.)

Geißler (Geißler, 1998; Streitz et al., 1999) proposed three techniques for working more efficiently on interactive walls. The goal was to limit the physical displacement of the user on a 4.5 × 1.1 m triple display (the DynaWall (Fraunhofer Institute, 2008)). The first technique is shuffling, a way of re-arranging objects within a medium-sized area: objects move by one length of their dimensions in a direction given by a short stroke made by the user on the appropriate widget. The second technique is throwing. To throw an object, the user performs a short stroke in the direction opposite to the one the object should move in, followed by a longer stroke in the correct direction. The length ratio between the two strokes determines the distance to which the object will be thrown. According to the author, this technique requires training to be used efficiently. The third technique, taking, is an application of pick-and-drop (see section 3.3) tailored to the DynaWall.

Drag-and-throw and push-and-throw (Hascoet, 2003; Collomb and Hascoet, 2004) are throwing techniques designed for multiple displays (on one or more computers). They address the limitation of earlier throwing techniques (Geißler, 1998; Streitz et al., 1999) by providing users with a real-time preview of where the dragged object will land if thrown. These techniques are based on visual feedback, metaphors and the explicit definition of trajectories (fig. 1–e). Three types of visual feedback are used: the trajectory, the target and the take-off area (an area that maps to the complete display). Drag-and-throw and push-and-throw have different trajectories: drag-and-throw uses the archery metaphor (the user performs a reverse gesture: to throw an object to the right, the pointer has to be moved to the left) while push-and-throw uses the pantograph metaphor (the user's movements are amplified). The main strength of these techniques is that the trajectory of the object can be visualized and controlled before the object is actually sent, so users can adjust their gesture before validating it. Therefore, contrary to other throwing techniques, drag-and-throw and push-and-throw have very low error rates (Collomb and Hascoet, 2004).
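To make the two metaphors concrete, here is a minimal Java sketch of the position mapping they imply; the class name and the gain policy are our own illustration, not PoIP code. With a positive gain the mapping amplifies the user's movement (pantograph, push-and-throw); with a negative gain it reverses it (archery, drag-and-throw).

```java
import java.awt.Point;

// Illustrative mapping from pointer motion to the previewed landing point.
// Names and gain policy are hypothetical, not taken from PoIP.
final class ThrowMapping {
    private final double gain; // > 0: pantograph metaphor; < 0: archery (reversed)

    ThrowMapping(double gain) { this.gain = gain; }

    /** Previewed drop position for a drag that started at 'origin'. */
    Point preview(Point origin, Point pointer) {
        int dx = pointer.x - origin.x;
        int dy = pointer.y - origin.y;
        return new Point(origin.x + (int) Math.round(gain * dx),
                         origin.y + (int) Math.round(gain * dy));
    }
}
```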

3.2. Source-oriented interaction

We use the term source-oriented to characterize drag-and-drop extensions where the focus remains around the source object's original location. Contrary to target-oriented interaction, a source-oriented interaction usually does not involve continuous adjustments: a single, fairly large movement from the user is usually sufficient to complete the task. However, in some cases, e.g. in drag-and-pop, different drag directions cause the tip cluster to be a little different every time, so users might need to re-orient once or twice to identify the correct tip icons. For such interaction styles, additional feedback (e.g. rubber bands) is usually provided to help users identify the real location of the objects that correspond to the ghosts.

In this section we quickly review the most recent extensions that can be considered source-oriented: drag-and-pop, the vacuum, and push-and-pop.

Drag-and-pop (Baudisch et al., 2003) is intended to help with drag-and-drop operations when the target is impossible or hard to reach, e.g. because it is located behind a bezel or far away from the user. The principle of drag-and-pop is to detect the beginning of a drag-and-drop and to move potential targets toward the user's current pointer location, so that the user can interact with these icons using small movements. Consider, for example, putting a file in the recycle bin: the user starts the drag gesture toward the recycle bin (fig. 1–c). After a few pixels, each valid target in the direction of the drag motion creates a linked tip icon that approaches the dragged object. Users can then drop the object on a tip icon. When the operation is complete, the tip icons and rubber bands disappear. If the initial drag gesture does not have the right direction, so that the target icon is not part of the tip icon set, the tip icons can be cleared by moving the pointer away from them, but the whole operation has to be restarted to get a new set of tip icons.

The vacuum (Bezerianos and Balakrishnan, 2005) (fig. 1–d), a variant of drag-and-pop, is a circular widget with a user-controllable arc of influence that is centered at the widget's point of invocation and spans out to the edges of the display. Far-away objects standing inside this influence arc are brought closer to the widget's center in the form of proxies that can be manipulated in lieu of the originals.

Push-and-pop (Collomb et al., 2005) was created to combine the strengths of the drag-and-pop and push-and-throw techniques. It uses the take-off area feedback from push-and-throw while optimizing the use of this area (fig. 1–f): the area contains full-size tip icons for each valid target. The notion of valid target and the grid-like arrangement of tip icons are directly inherited from drag-and-pop's layout algorithm. The advantage over drag-and-pop is that push-and-pop eliminates the risk of invoking a wrong set of targets. The advantages over push-and-throw are better readability (icons are part of the take-off area), easier target acquisition (Collomb et al., 2005), and the fact that users can focus on the take-off area.

3.3. Undirected interaction

Lastly, some interaction styles are neither source-oriented nor target-oriented. We call them undirected since feedback is not concentrated in one particular area of the display. Pick-and-drop, stitching and hyperdragging are the main examples of extensions that fall into this category, and we review them in this section.

Pick-and-drop (Rekimoto, 1997) was developed to extend drag-and-drop to distributed environments. While drag-and-drop requires the user to remain on the same computer while dragging objects around, pick-and-drop lets the user move objects from one computer to another using direct manipulation. This is done by giving the user the impression of physically picking up an object from one surface and laying it down on another surface. Pick-and-drop is closer to the copy-paste interaction technique than to drag-and-drop. Indeed, like the copy/paste operation, it requires two different steps: one to select the object to transfer, and one to put the object somewhere else. But pick-and-drop and drag-and-drop share a common advantage over copy-paste techniques: they avoid the user having to deal with a hidden clipboard. However, pick-and-drop is limited to interactive surfaces that accept the same type of touch-pen devices and that are part of the same network. Each pen has a unique ID, and data is associated with this unique ID and stored on a pick-and-drop server.

Hyperdragging (Rekimoto and Saitoh, 1999) (fig. 1–a) is part of a computer-augmented environment. It helps users smoothly interchange digital information between their laptops, table or wall displays, and other physical objects. Hyperdragging is transparent to the user: when the pointer reaches the border of a given display surface, it is sent to the closest shared surface. Hence, the user can continue the movement as if there were only one computer. To avoid confusion due to multiple simultaneous hyperdragging operations, each remote pointer is visually linked to the computer controlling it (simply by drawing a line on the workspace).

Stitching (Hinckley et al., 2004) (fig. 1–b) is an interaction technique designed for pen-operated mobile devices. It allows a drag-and-drop gesture to start on one screen and end on another screen. The devices have to support networking. A user starts dragging an object on the source screen, reaches its border, then crosses the bezel and finishes the drag-and-drop on the target screen. The two parts of the stroke are synchronized at the end of the operation, and the bound devices are then able to transfer data.

4. Comparison of extensions

The extensions presented in the previous section differ in several ways. The instrumental interaction model (Beaudouin-Lafon, 2000) is useful for exhibiting dimensions along which their interaction styles can be compared. In this section, we quickly review the instrumental interaction model. Based on this model, we exhibit dimensions such as instrument feedback and instrument coverage. By considering these dimensions and others (drag-over and drag-under types of feedback), we draw a comparison of the previous approaches, summarized in the table of Figure 3.

4.1. Instrumental interaction


Instrumental interaction consists of describing interactions through instruments. An instrument can be considered as a mediator between the user and domain objects. The user acts on the instrument, which transforms the user's actions into commands affecting relevant target objects. Instruments have reactions that enable users to control their actions on the instrument, and provide feedback as the command is carried out on target objects (see figure 2).

Figure 2. An interaction instrument mediates the interaction between the user and domain objects: the user's actions produce instrument reactions and are transformed into commands; domain object responses are rendered as feedback (Beaudouin-Lafon, 2000).

Different extensions of drag-and-drop can be embodied through different instruments. Interactions between an instrument and domain objects (commands/responses) are the same for drag-and-drop and all the extensions presented previously, i.e. all instruments support the same primitive and generic commands: source selection, target selection, specification of the type of action, data transfer (validation of the selected target) and cancellation.

The most important part of typical drag-and-drop interactions concerns interactions between the user and the instrument (principally reactions and feedback). Reactions and feedback of instruments involve three types of feedback: drag-under feedback, drag-over feedback and instrument feedback. When a user changes from one instrument to another, drag-under and drag-over visual effects are roughly preserved, but instrument feedback and reactions vary significantly. For simplification, in the following sections we assimilate instrument feedback and reactions into a single concept that we refer to as instrument feedback.

4.2. Drag-under and drag-over feedback

In regular drag-and-drop operations, feedback is usually referred to as drag-under feedback and drag-over feedback. Drag-over feedback consists mainly of feedback that occurs on the source object. Typically, during a regular drag from a source, the pointer shape changes into a drag icon or ghost that represents the data being dragged. This icon can change during the drag to indicate the current action (copy/move/alias). Hence, drag-over feedback mainly consists of changes in the shape and color of the source ghost when the user changes the type of action, or when a drop becomes possible or impossible. Some windowing systems go a step further by providing animation: e.g. to indicate that the action was canceled, they may animate the ghost back to its original location. It is interesting to note that even though the drag-and-drop model is mature, not all windowing systems offer this feature. When no animation is provided, it is significantly more difficult for the user to follow the effect of a cancel operation.

Drag-under feedback denotes the visual effects provided on the target side. It conveys information when a potential target has a drag icon passing over it. The target can respond in many ways: by modifying its shape and color, or, in more sophisticated cases, by performing actions. For example, when a file is moved over a set of folders and remains above a particular folder for a sufficient amount of time, the folder might open up to let the user recursively explore the file hierarchy down to the desired target folder.
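As a point of comparison with a conventional toolkit, drag-under feedback of the simple kind described above can be obtained with Java's standard java.awt.dnd package; the sketch below is our own illustration (unrelated to PoIP) that highlights a target while a drag hovers over it.

```java
import java.awt.Color;
import java.awt.dnd.DropTarget;
import java.awt.dnd.DropTargetAdapter;
import java.awt.dnd.DropTargetDragEvent;
import java.awt.dnd.DropTargetDropEvent;
import java.awt.dnd.DropTargetEvent;
import javax.swing.JLabel;

// Drag-under feedback with the standard java.awt.dnd API: the target
// highlights itself while a drag passes over it and reverts afterwards.
final class HighlightingTarget {
    static void install(final JLabel target) {
        final Color normal = target.getBackground();
        new DropTarget(target, new DropTargetAdapter() {
            @Override public void dragEnter(DropTargetDragEvent e) {
                target.setOpaque(true);
                target.setBackground(Color.YELLOW); // drag-under feedback
            }
            @Override public void dragExit(DropTargetEvent e) {
                target.setBackground(normal);
            }
            @Override public void drop(DropTargetDropEvent e) {
                target.setBackground(normal);
                e.rejectDrop(); // the data transfer itself is out of scope here
            }
        });
    }
}
```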

While drag-over and drag-under visual effects are sufficient to describe feedback in the case of regular drag-and-drop operations, they are not for most of its recent extensions. In the latter case, more feedback is needed. This additional feedback is the instrument feedback mentioned previously; it will be described in the next section.

Interaction style   | Distributed surface | Multi-user | Scalability | Instrument feedback                | Instrument coverage | Category
Pick-and-drop [20]  | ✓                   | ✓          |             | Ghost                              | Partial             | Undirected
Hyperdragging [21]  | ✓                   | ✓          |             | Line                               | Full                | Undirected
Stitching [14]      | ✓                   |            |             | Trajectory, screen frame, pie menu | Partial             | Undirected
Throwing [11]       |                     |            | ✓           | None                               | Full                | Target-oriented
Drag-and-pop [1]    | ✓                   |            | ✓           | Rubber bands, tip icons            | Targets             | Source-oriented
Vacuum [3]          |                     |            | ✓           | Arc of influence, proxies          | Full                | Source-oriented
Drag-and-throw [13] | ✓                   |            | ✓           | Take-off area, trajectory          | Full                | Target-oriented
Push-and-throw [13] | ✓                   |            | ✓           | Take-off area, trajectory          | Full                | Target-oriented
Push-and-pop [6]    | ✓                   |            | ✓           | Take-off area, tip icons           | Targets             | Source-oriented

Figure 3. Comparison of drag-and-drop extensions. The first three columns indicate the issues primarily addressed by each technique.

4.3. Instrument feedback

Instrument feedback is useful to provide users with better control over their actions. Instrument reactions and feedback can be considered as a specific type of recognition feedback. As suggested by Olsen and Nielsen (2001),

“Recognition feedback allows users to adapt to the noise, error, and miss-recognition found in all recognizer-based interactions.”

Such feedback includes, for example, the rubber bands that are used in drag-and-pop to help users locate and identify potential targets. Another example is the case of throwing, where take-off areas as well as trajectories are displayed to help users adjust target selection. Such feedback is used in other extensions and varies significantly from one particular instrument to another. The table of Figure 3 summarizes these differences.

4.4. Instrument coverage

Not all instruments described above support full coverage. By coverage, we mean the areas of a surface where an instrument can drop an object.

Some instruments have partial coverage. For example, on wall-size displays with touch/pen input, coverage is partial since some areas might be out of reach, e.g. located too high.

Other instruments, mainly target-oriented instruments, may offer coverage limited to targets. For example, with push-and-pop, only potential targets can be reached. With such instruments, source objects cannot be dropped on any other area of the surface. There are many drag-and-drop situations where no specific target is aimed at and where such instruments would fail, so it is important to consider this issue.

Finally, instruments that make it possible to reach all areas of the display are considered full-coverage instruments. The table of Figure 3 shows the type of coverage (partial coverage, full coverage or coverage limited to targets) of most existing drag-and-drop extensions.

5. M-CIU model and PoIP API

The implementation model we propose is called M-CIU (Multi-Computer, multi-Instrument and multi-User) and addresses the different issues discussed in section 2. It offers a unified framework that makes it possible to implement every drag-and-drop extension discussed previously. Hence, shifting from one interaction style to another is facilitated.

The M-CIU model is implemented as an API (application programming interface) called PoIP (Pointer Over IP). PoIP supports drag-and-drop-like manipulations in different environments and can be used in the development of most Java-based applications. PoIP is implemented in Java and relies on RMI (Remote Method Invocation) for network communication and on AWT for events. Even though we use PoIP to illustrate the M-CIU model and to provide details where useful, we believe that our model is general enough to be implemented at other levels or in other programming languages or toolkits. PoIP is available for download together with an example of use (Collomb, 2008).

5.1. Overview of M-CIU model

The M-CIU model is based on five key entities: instruments, drag-and-drop managers, shared windows, the distributed surface server and the topology manager. We quickly present these entities in this section and provide more details in the following subsections.

Instruments embody interaction techniques. A hierarchy of instruments (see section 5.2) is provided to factor out the implementation details shared by different interaction techniques. In order to support distribution, each instrument includes one master instrument and several slave instruments, which are further described in section 5.2.

Drag-and-drop managers play a central part as they are used to coordinate all the other main entities of the model. Every computer involved in a distributed drag-and-drop in our model has to run a drag-and-drop manager. These managers are responsible for the registration of source and target components (i.e. UI components involved in the interaction); they handle the creation and destruction of slave instruments and also help with the redirection of event streams. A drag-and-drop manager is the single interface for a set of instruments. Indeed, a master instrument can change depending on the context, and slave instruments are created and destroyed according to user activity. The drag-and-drop manager provides a stable interface for this set of instruments and further simplifies communications between the main model entities.

Shared windows are created to display most feedback and are described further in section 5.3.

Source and target components can be any basic UI components involved in the interaction, provided they can register with a drag-and-drop manager.

The distributed surface server and the topology manager are the entities responsible for handling shared windows distributed over displays and their associated topology. They are described in more detail in section 5.3.

5.2. Multi-instrument support and genericity

Our approach to the usability and scalability problems mentioned in section 2.2 consists of providing a unified implementation model that embodies drag-and-drop-like interaction techniques in instruments. Even though the instrumental interaction model (Beaudouin-Lafon, 2000) is primarily devoted to describing interactions, some aspects of the model are well suited to structuring implementations.

Hence, our approach consists of proposing a multi-instrument model which meets the following requirements:

• Instruments act upon objects transparently: objects are notified about the manipulation as usual but they are not aware of the type of instrument in use. The effort needed to introduce new instruments is minimal.

Page 11: Extending Drag-and-drop to New Interactive Environments: a ... · multi-display, multi-instrument and multi-user approach Research report RR-08012 Maxime Collomb, Mountaz Hasco et

10 M. Collomb, M. Hascoet

• Users can choose the instrument they want to use, depending on their preferences and the context (touch display, large display, small display). This choice can be part of the user's profile. It can also be made on the fly to adapt to an evolving context.

• Several users can manipulate objects at the same time with different instruments.

Practically, instruments are defined through a hierarchy of classes, all inheriting from a very generic instrument class, in the same way that in most UI toolkits all widgets or graphical components inherit from a generic window class.

It is important to note that an instrument embodies the implementation of both the interaction and the distribution (multi-computer support).
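This factoring can be pictured with a minimal class-hierarchy sketch; all names below are illustrative placeholders rather than PoIP's actual classes. Generic event handling and master/slave plumbing belong to abstract base classes, so a new technique only overrides event processing and feedback.

```java
import java.awt.Graphics2D;
import java.awt.event.MouseEvent;

// Sketch of an instrument hierarchy (placeholder names, not PoIP's API).
abstract class Instrument {
    /** Interaction logic: interpret an input event. */
    abstract void processEvent(MouseEvent e);
}

abstract class MasterInstrument extends Instrument {
    /** Distribution logic: one slave per shared window during a manipulation. */
    abstract void createSlaves();
    abstract void destroySlaves();
}

abstract class SlaveInstrument extends Instrument {
    /** Technique-specific instrument feedback drawn in a shared window. */
    abstract void paintFeedback(Graphics2D g);
}

// A concrete technique specializes only what differs from the generic base.
final class PushAndThrowMaster extends MasterInstrument {
    @Override void processEvent(MouseEvent e) { /* detect drag, forward moves */ }
    @Override void createSlaves()  { /* ask managers for one slave per window */ }
    @Override void destroySlaves() { /* on drop or cancel */ }
}
```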

Interaction

An instrument receives an input stream (e.g. events from a mouse and a keyboard) and processes it to implement the interaction. Figure 4 presents two state diagrams (Muller and Gaertner, 2003) that depict the interaction models of push-and-throw and push-and-pop. The first state diagram could actually also stand for the drag-and-drop and drag-and-throw interaction models, and the accelerated push-and-throw interaction model is very close to the second diagram.

Drag-and-drop-like state diagrams share several common points due to the very nature of the interaction techniques that they embody. However, some differences between instruments are visible in these diagrams, e.g. an additional state for push-and-pop. The most important differences between instruments concern the processing of input events and the associated feedback.

5.3. Multi-computer support and interoperability

In order to address the multi-computer issues discussed in section 2, one preliminary requirement is to support some sort of interoperability between windowing systems. In our model, interoperability is based on (1) the implementation of distributed surfaces and (2) slave instruments.

Figure 4. State diagrams for push-and-throw (top) and push-and-pop (bottom). Both diagrams involve idle, inactive and dragging states; transitions are driven by mouseDown(xs, ys), mouseDragged(x, y) and mouseUp() events, guarded by whether the source accepts the action (sourceAcceptAction) and by whether dist(xs, ys, x, y) exceeds a threshold. The push-and-pop diagram has two dragging states, switching between push-and-pop and accelerated push-and-throw according to the pointer's distance from the source.
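The first diagram translates almost directly into code. The following sketch is our own reconstruction of that state machine (names and the threshold value are illustrative), showing how drag detection is guarded by the distance between the press location and the current pointer position:

```java
import java.awt.event.MouseEvent;
import java.awt.geom.Point2D;

// Reconstruction of Figure 4 (top): idle/inactive/dragging states driven
// by mouse events. Names and the threshold value are illustrative.
final class DragStateMachine {
    enum State { IDLE, INACTIVE, DRAGGING }

    private static final int THRESHOLD = 5; // pixels (illustrative value)
    private State state = State.IDLE;
    private int xs, ys;                     // press location

    void mouseDown(MouseEvent e, boolean sourceAcceptsAction) {
        xs = e.getX(); ys = e.getY();
        state = sourceAcceptsAction ? State.INACTIVE : State.IDLE;
    }

    void mouseDragged(MouseEvent e) {
        if (state == State.INACTIVE
                && Point2D.distance(xs, ys, e.getX(), e.getY()) > THRESHOLD) {
            state = State.DRAGGING; // drag detected: slaves can be created
        }
        // while DRAGGING, the move is forwarded to slaves for feedback
    }

    void mouseUp(MouseEvent e) {
        if (state == State.DRAGGING) { /* notify target, transfer data */ }
        state = State.IDLE;
    }
}
```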

Distributed surfaces

As defined previously, a distributed surface is a set of windows, possibly displayed by different windowing systems, behaving as if they were part of the same workspace. In particular, a drag-and-drop interaction can transparently start in one window of the distributed surface, handled by one windowing system, and end in another window operated by another windowing system.


Shared windows

So far we have used the term window in a general way. We now need to refine the concept and introduce the term shared window to provide more implementation details. A shared window is used to make it possible for a given common window or graphic component to be part of a distributed surface. A shared window has a name and a unique ID. Shared windows act on their associated windows or components in two ways:

• Redirection of input events received in the associated window. Input redirection is the transmission of an input stream so that it can be processed on a remote window; it involves (1) capturing events on a source window, (2) transmitting events from the source window to the target window and (3) analyzing events on the target window (source and target windows can be the same window). This mechanism allows a pointer to move across the distributed surface, thus transparently surpassing windowing system boundaries.

• Rendering feedback. A transparent pane is laid on top of the window in order to render multiple pointers. This pane is also made available to instruments to implement different types of feedback, especially drag-over feedback and instrument feedback (a Swing sketch of such a pane follows).
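In Swing, such a transparent pane can be realized as a glass pane. The sketch below is our own minimal illustration using the standard JFrame.setGlassPane mechanism (PoIP's actual classes may differ); it paints one glyph per remote pointer above the window's normal content:

```java
import java.awt.Graphics;
import java.awt.Graphics2D;
import java.awt.Point;
import java.util.Map;
import java.util.concurrent.ConcurrentHashMap;
import javax.swing.JComponent;
import javax.swing.JFrame;

// Minimal glass-pane sketch (our own illustration, not PoIP's classes):
// renders one glyph per remote pointer above the window's content.
class PointerPane extends JComponent {
    private final Map<Integer, Point> pointers = new ConcurrentHashMap<>();

    /** Called when a redirected event moves the pointer of a given device. */
    void movePointer(int deviceId, Point p) {
        pointers.put(deviceId, p);
        repaint();
    }

    @Override protected void paintComponent(Graphics g) {
        Graphics2D g2 = (Graphics2D) g;
        for (Point p : pointers.values()) {
            g2.fillOval(p.x - 4, p.y - 4, 8, 8); // simple pointer glyph
        }
    }

    static PointerPane installOn(JFrame frame) {
        PointerPane pane = new PointerPane();
        pane.setOpaque(false);   // keep the underlying content visible
        frame.setGlassPane(pane);
        pane.setVisible(true);   // glass panes are hidden by default
        return pane;
    }
}
```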

Distributed surface server

A distributed surface is useful for establishing connections between the different shared windows, independently of their associated windowing systems. Shared windows form a continuous workspace according to a given topology. This workspace can be distributed between multiple computers and used by multiple users, each with different input devices (i.e. multi-computer, multi-user).

The distributed surface server is used to:

• manage window IDs. An ID identifies both a window and its associated input device. An ID is assigned to a window when the window is registered on the server.

• maintain a list of shared windows, ensuring that each time a shared window registers or unregisters, all other windows are notified.

• handle the topology of windows within the surface, according to a topology manager (a sketch of a possible server contract is given below).
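Since PoIP relies on RMI, the server's responsibilities listed above can be pictured as a remote interface. The interface below is a hypothetical sketch of such a contract, not PoIP's published API:

```java
import java.rmi.Remote;
import java.rmi.RemoteException;

// Hypothetical RMI contract for the distributed surface server; this is an
// illustration of the responsibilities above, not PoIP's published API.
interface DistributedSurfaceServer extends Remote {
    /** Registers a shared window (and its input device) and returns its unique ID. */
    int registerWindow(String name) throws RemoteException;

    /** Unregisters a shared window; all other windows are notified. */
    void unregisterWindow(int windowId) throws RemoteException;

    /** Returns the ID of the window the pointer enters when it leaves the
     *  given edge of the given window, or -1 if the topology has no neighbor there. */
    int neighborWindow(int windowId, int edge) throws RemoteException;
}
```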

Master and slave instruments

In order to support multi-computer environments, instruments are decomposed into one master instrument and several slave instruments. The number of slave instruments depends on the context, and more specifically on the number of different computers potentially involved in the distributed surface. The most generic levels of communication between slaves and masters are handled by the most abstract instrument classes, while more specific communication is left to more specific instrument classes. Indeed, communication between masters and slaves may vary a lot, both in nature and in frequency, from one instrument to another.

A given drag-and-drop manager handles one and only one master instrument and a variable number of slave instruments, depending on the number of running drag-and-drop interactions. As shown in figure 5, a master instrument is mainly devoted to dispatching input events and handling its associated slave instruments, while the slave instruments do the real job, e.g. find which component is at a given location or perform adequate feedback in the relevant shared window. A master instrument requests the creation of slave instruments when a drag-and-drop-like manipulation is detected and asks for their destruction at the end of the manipulation. Thus, a master instrument handles n slave instruments during a manipulation, where n is the total number of shared windows in the distributed surface.
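This division of labor can be summarized with a short sketch (all names are ours): the master reacts to drag detection by requesting one slave per shared window, then merely forwards pointer motion; the local work happens in the slaves.

```java
import java.util.ArrayList;
import java.util.List;

// Sketch of the master's role during one manipulation (names are ours):
// request one slave per shared window, forward pointer motion, and
// dispose of the slaves when the manipulation ends.
final class MasterSketch {
    interface DndManagerHandle { SlaveHandle createSlave(); }
    interface SlaveHandle { void pointerMoved(int x, int y); void dispose(); }

    private final List<SlaveHandle> slaves = new ArrayList<>();

    void onDragDetected(Iterable<DndManagerHandle> managers) {
        for (DndManagerHandle m : managers) {
            slaves.add(m.createSlave()); // one slave per shared window
        }
    }

    void onPointerMoved(int x, int y) {
        for (SlaveHandle s : slaves) s.pointerMoved(x, y); // feedback is local
    }

    void onDropOrCancel() {
        for (SlaveHandle s : slaves) s.dispose();
        slaves.clear();
    }
}
```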

5.4. Multi-user support and concurrency

The M-CIU model allows multiple users to interact simultaneously on a distributed surface. This is achieved by augmenting event streams with the ID of the input device from which they originated. (The number of users cannot be higher than the number of computers involved, since only one input stream is managed on each computer.) This feature certainly enables multiple users to interact using several different instruments to perform drag-and-drop-like operations, but it does not solve all the problems that have to be handled at the level of each application to ensure that concurrency results in coherent behavior. Handling further concurrency issues is beyond the scope of this work.

Figure 5. Example of two concurrent drag-and-drop-like manipulations on a distributed surface containing two windows: each window's drag-and-drop manager (DndManager #1 and #2) hosts one slave instrument per running manipulation (a drag-and-drop slave and a push-and-throw slave), while the drag-and-drop master and the push-and-throw master dispatch their input event streams to these slaves over RMI.
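Concretely, the augmentation described above can be pictured as a small wrapper carrying the device ID alongside each event; the field names below are our own illustration:

```java
import java.awt.event.MouseEvent;

// Sketch of an augmented event-stream element (field names are ours): each
// event carries the ID of its originating input device, so concurrent
// manipulations by different users can be told apart.
final class TaggedEvent {
    final int deviceId;     // ID assigned when the window/device registered
    final MouseEvent event; // the underlying AWT event

    TaggedEvent(int deviceId, MouseEvent event) {
        this.deviceId = deviceId;
        this.event = event;
    }
}
```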

5.5. A complete example of a typical multi-instrument, multi-display and multi-user interaction with the M-CIU model

Let's consider an example where two users are discussing a project, meeting in a room where a wall-sized display with touch capabilities is available. User A is using his own laptop and user B is using the wall-sized display and the associated computer. At one point, user A wants to give user B data displayed on the screen of his laptop. Conversely, user B wants to give user A data displayed on the wall-sized display. On his laptop, user A uses the regular drag-and-drop interaction style while user B, on the contrary, prefers to use the push-and-throw interaction style on the wall-sized touch display. To handle drag-and-drop-like operations between both users, two different computers are involved, with two different applications and two different interaction styles. User A starts a drag-and-drop from computer A, and the target component is handled by application B running on computer B. At the same time, user B starts a push-and-throw with his pen from computer B toward an application running on laptop A. This example of a typical distributed drag-and-drop-like operation is depicted in figure 5. Using the M-CIU model, all interactions are handled through the main entities of the model described in the previous sections and involve five steps: initialization, drag detection, drag, drop and finalization.

Initialization

Before any drag-and-drop-like operation starts, applications A and B are launched and a few elements needed for drag-and-drop-like operations are created once: the source listener, the target listener, and the master instruments. While these operations are required only once, it is still possible to change these elements later, e.g. a master instrument can be changed dynamically according to user preferences. In this initialization step, source and target components have to register themselves with the drag-and-drop managers.
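In code, this initialization step might look like the following sketch; the manager and instrument types shown are illustrative stand-ins, not PoIP's real entry points:

```java
import java.awt.Component;

// Hypothetical initialization sketch (names are ours, not PoIP's real API):
// one drag-and-drop manager per computer, a master instrument chosen per
// user preference, and components registered once at startup.
final class InitializationSketch {
    interface Instrument { /* embodies an interaction technique */ }

    interface DndManager {
        void setMasterInstrument(Instrument i); // may be changed on the fly
        void registerSource(Component source);
        void registerTarget(Component target);
        void unregister(Component c);           // used at finalization
    }

    static void initialize(DndManager manager, Instrument preferred,
                           Component source, Component target) {
        manager.setMasterInstrument(preferred);
        manager.registerSource(source);
        manager.registerTarget(target);
    }
}
```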

Drag detection

The beginning of user A's drag-and-drop is detected by the master instrument of computer A (which receives all input events of computer A). Likewise, the beginning of user B's push-and-throw is detected by the master instrument of computer B. When the beginning of the interaction is detected, the associated source components are notified and respond. Once the source components have accepted the operation, the master instruments ask for the creation of all necessary slave instruments. As a result, each drag-and-drop manager handles two slave instruments: one for user A's drag-and-drop and one for user B's push-and-throw.

Drag

During the drag process, master instruments notify slave instruments that the pointer is moving. This type of redirection ensures that the slave instruments can perform adequate feedback wherever the pointer moves. When the pointer moves over a potential target component, both the source and target components are notified.

Drop

At the end of the operation, targets are notified and data transfer from source to target can take place. At this stage, master instruments also ask for the destruction of all slave instruments, in the same way as they previously asked for their creation.

Finalization

When applications A and B are closed, source and target components unregister themselves from their associated drag-and-drop managers. This happens only once per session, whereas the creation and deletion of slave instruments happens every time a new drag-and-drop-like operation is performed.

6. An application: Orchis

Orchis is an interactive and collaborative graphical application designed for sharing bookmark collections. Its architecture is client/server, and most data is stored on the server side. Other web-based bookmarking clients have been implemented as well to access the same data; however, Orchis offers most of the graphical features associated with a more direct manipulation style. Hence, we consider Orchis a good place to illustrate the use of our multi-instrument, multi-user, multi-computer model.

6.1. Scenario

Let's consider a situation where several users gather bookmarks that deal with, for example, web site design. Each of them owns a laptop, and they regularly meet and use an interactive wall-sized display to compose a common repository.

In this context, one distributed surface server runs on the computer associated with the wall-sized display, and Orchis runs on every computer involved in further interactions (i.e. all laptops and the computer associated with the wall-sized display). Orchis displays windows such as those depicted in Figure 6. The topology of shared windows determines how a pointer can move from one window on one computer to another window on another computer; it is shown in Figure 7 for two users and one wall-sized display.

Figure 7. Topology of three Orchis windows (1) in the meeting room and (2) as managed by the distributed surface server.

At one point, one user, using regular drag-and-drop, copies his bookmarks dealing with web design to the wall-sized display. Using accelerated push-and-throw, another user does the same (Figure 6).

Later, one of the users organizes the bookmarks gathered on the wall-sized display by directly using the touch device associated with the display. Accelerated push-and-throw is defined as the preferred technique for the wall-sized display, so this user operates using accelerated push-and-throw. By using the mouse of his laptop, another user can bring his pointer onto the shared window of the wall-sized display and occasionally help the first user in the organization task.

At the end of the process, the resulting bookmark collection is saved in the database so that it can be reused later by both Orchis and all the other connected web clients.

6.2. Discussion

Orchis is an excellent example of what can be done with the PoIP API:


Figure 6. Three users working with Orchis; two bookmarks are copied using accelerated push-and-throw and regular push-and-throw. A view of the right user's window is added to render the workspace as seen by that user.

• Several windows from different computers can set up a seamless distributed surface.

• This surface can be used simultaneously by several users.

• Each user can choose his preferred drag-and-drop-like interaction technique. Note that Orchis only uses instruments that offer full coverage (see the table of Figure 3).

One limitation is that only one input stream is managed per computer: the number of simultaneous users is limited to the number of computers involved in the distributed surface.

7. Conclusion

In this paper, we have shown the changes necessary for drag-and-drop to meet the requirements of new emerging interactive environments. We have pointed out issues that arise with these new environments. We have further reviewed, compared and discussed recent drag-and-drop extensions that partially address these issues. Finally, we have proposed the M-CIU model, an implementation model that builds upon these analyses to meet the challenging requirements of new emerging interactive environments and to make it possible for a programmer to support most extensions of drag-and-drop in a single unified framework.

We provide an API called PoIP that implements the M-CIU model and that can be used in the development of most Java-based applications. The API has been used in the development of a collaborative bookmarking application called Orchis. Overall, PoIP was found to be robust, and even though PoIP uses a layered pane to display pointers and feedback, we did not notice any significant reduction in performance with the Java applications tested.

However, the question of the level at which the model should be implemented is left open. Our approach with PoIP was to implement the model at the toolkit level. Considering lower levels (windowing system level, window manager or middleware level) would certainly be costly but would also most likely provide great benefits. Even though our model is implemented at the toolkit level, we made it general enough to be implemented at other levels as well.

Our model proposes a multi-instrument approach, which is important to address the problems of usability and scalability mentioned in section 2. In that context, modularity and genericity were used to minimize the cost of introducing new drag-and-drop extensions as new needs arise. Our model further supports multi-computer environments transparently. This is useful to ensure that drag-and-drop operations can occur on distributed surfaces displayed over several distinct computers possibly running different windowing systems. Multi-computer support is achieved thanks to the combination of shared surface management and slave instruments. Finally, our model supports the multiplexing of input streams thanks to input device IDs. Hence, we make it possible for multiple users to perform different types of drag-and-drop operations simultaneously.

References

P. Baudisch, E. Cutrell, D. Robbins, M. Czerwinski, P. Tandler, B. Bederson, and A. Zierlinger. Drag-and-pop and drag-and-pick: techniques for accessing remote screen content on touch and pen-operated systems. In Proceedings of Interact 2003, Sep. 1–5 2003.

Michel Beaudouin-Lafon. Instrumental interaction: an interaction model for designing post-WIMP user interfaces. In ACM CHI '00 proceedings, pages 446–453, ACM Press, 2000.

A. Bezerianos and R. Balakrishnan. The vacuum: facilitating the manipulation of distant objects. In ACM CHI '05 proceedings, pages 361–370, ACM Press, 2005.

M. Collomb and M. Hascoet. Speed and accuracy in throwing models. In HCI2004, Design for Life, Volume 2, pages 21–24. British HCI Group, 2004.

M. Collomb. PoIP: an API for implementing advanced drag-and-drop techniques. http://edel.lirmm.fr/dragging/, 2008.

M. Collomb, M. Hascoet, P. Baudisch, and B. Lee. Improving drag-and-drop on wall-size displays. In Proceedings of Graphics Interface 2005, Victoria, BC, 2005.

M. Collomb and M. Hascoet. Comparing drag-and-drop implementations. Technical report RR-LIRMM-05003, LIRMM, University of Montpellier, France, 2005.

Joëlle Coutaz, Stanislaw Borkowski, and Nicolas Barralon. Coupling interaction resources: an analytical model. In EUSAI 2005, pages 183–188, 2005.

Fraunhofer Institute. The DynaWall project. http://www.ipsi.fraunhofer.de/ambiente/english/projekte/projekte/dynawall.html, 2008.

Paul M. Fitts. The information capacity of the human motor system in controlling the amplitude of movement. Journal of Experimental Psychology, 47(6):381–391, June 1954. (Reprinted in Journal of Experimental Psychology: General, 121(3):262–269, 1992.)

J. Geißler. Shuffle, throw or take it! Working efficiently with an interactive wall. In ACM CHI '98 proceedings, pages 265–266, ACM Press, 1998.

S. Greenberg and D. Marwood. Real time groupware as a distributed system: concurrency control and its effect on the interface. In ACM CSCW '94 proceedings, pages 207–217, ACM Press, 1994.

M. Hascoet, M. Collomb, and R. Blanch. Évolution du drag-and-drop : du modèle d'interaction classique aux surfaces multi-supports. Revue I3, 4(2), 2004.

M. Hascoet. Throwing models for large displays. In HCI2003, Designing for Society, Volume 2, pages 73–77. British HCI Group, 2003.

K. Hinckley, G. Ramos, F. Guimbretiere, P. Baudisch, and M. Smith. Stitching: pen gestures that span multiple displays. In Proceedings of AVI '04, pages 23–31, ACM Press, 2004.


G. Humphreys, M. Houston, R. Ng, R. Frank, S. Ahern, P. D. Kirchner, and J. T. Klosowski. Chromium: a stream-processing framework for interactive rendering on clusters. In Proceedings of SIGGRAPH '02, pages 693–702, ACM Press, 2002.

D. R. Hutchings, J. Stasko, and M. Czerwinski. Distributed display environments. Interactions, 12(6):50–53, Nov. 2005.

Brad Johanson, Greg Hutchins, Terry Winograd, and Maureen Stone. PointRight: experience with flexible input redirection in interactive workspaces. In ACM UIST '02 proceedings, pages 227–234, ACM Press, 2002.

C. Lachenal. Modèle et infrastructure logicielle pour l'interaction multi-instrument multi-surface. PhD thesis, Université Joseph Fourier, Grenoble, France, 2004.

P.A. Muller and N. Gaertner. Modélisation objet avec UML. Éditions Eyrolles, 2003.

Dan R. Olsen and S. Travis Nielsen. Laser pointer interaction. In ACM CHI '01 proceedings, pages 17–22, ACM Press, 2001.

Jun Rekimoto. Pick-and-drop: a direct manipulation technique for multiple computer environments. In ACM UIST '97 proceedings, pages 31–39, ACM Press, 1997.

Jun Rekimoto and Masanori Saitoh. Augmented surfaces: a spatially continuous work space for hybrid computing environments. In ACM CHI '99 proceedings, pages 378–385, ACM Press, 1999.

B. Shneiderman. Direct manipulation: a step beyond programming languages. In Proceedings of the Joint Conference on Easier and More Productive Use of Computer Systems (Part II): Human Interface and the User Interface - Volume 1981, ACM Press, 1981.

C. Shoeneman. Synergy.http://synergy2.sourceforge.net/. 2008.

N. A. Streitz, J. Geißler, T. Holmer, S. Konomi, C. Müller-Tomfelde, W. Reischl, P. Rexroth, P. Seitz, and R. Steinmetz. i-LAND: an interactive landscape for creativity and innovation. In ACM CHI '99 proceedings, pages 120–127, ACM Press, 1999.

Sony Japan. FlyingPointer. http://www.sony.jp/products/consumer/pcom/software_02q1/flyingpointer, 2002.

Stanford University Computer Science. The Stanford Interactive Workspaces project. http://iwork.stanford.edu/, 2008.

D. Thevenin and J. Coutaz. Adaptation and plasticity of user interfaces. In Workshop on Adaptive Design of Interactive Multimedia Presentations for Mobile Users, 1999.

Xdmx project. Distributed multihead X.http://dmx.sourceforge.net/. 2008.