
Skeletons and Silhouettes: Comparing User Representations at a Gesture-based Large Display

Christopher Ackad
Faculty of Engineering and Information Technologies
The University of Sydney, NSW, 2006, Australia
[email protected]

Judy Kay
Faculty of Engineering and Information Technologies
The University of Sydney, NSW, 2006, Australia
[email protected]

Martin Tomitsch
Faculty of Architecture, Design, and Planning
The University of Sydney, NSW, 2006, Australia
[email protected]

ABSTRACT
Mid-air gestures offer a promising way to interact with large public displays. User representations are important to attract people to such displays, convey interactivity and provide meaningful gesture feedback. We evaluated two forms of user representation, an abstract skeleton and a silhouette, at a large public information display. Results from 56 days, with 190 sessions involving 483 detected people, indicate the silhouette attracted more passers-by to interact and, of these, more engaged in serious browsing interactions. By contrast, the skeleton representation had more playful interactions. Our work contributes to the understanding of the implications of these choices of user representation.

ACM Classification Keywords
H.5.2 Information Interfaces and Presentation: User Interfaces—Graphical user interfaces, input devices and strategies, interaction styles, screen design, user-centered design.

Author Keywords
Public Displays; Whole Body Interaction; User Representation; Gestural Interaction; Natural User Interfaces

INTRODUCTION
Mid-air gesture-based interaction offers new ways to interact with large public displays. A user representation is important in these displays for several reasons: it attracts people to the display; it helps people realise the display is interactive; and it provides feedback as the user performs gestures to control the display [7]. There are two key classes of user representation in public displays [5, 7]. One is appearance matching, where the user recognises their own image on a display. This was used in several studies with either a silhouette or video representation (e.g. [12, 13, 8, 10, 6]). The second is kinesthetic-visual matching, where the user matches the motion of an on-screen figure or object with their own movement. These include avatars [11, 2, 7] or abstract forms with

Permission to make digital or hard copies of all or part of this work for personal or classroom use is granted without fee provided that copies are not made or distributed for profit or commercial advantage and that copies bear this notice and the full citation on the first page. Copyrights for components of this work owned by others than ACM must be honored. Abstracting with credit is permitted. To copy otherwise, or republish, to post on servers or to redistribute to lists, requires prior specific permission and/or a fee. Request permissions from [email protected].
CHI'16, May 07-12, 2016, San Jose, CA, USA
©2016 ACM. ISBN 978-1-4503-3362-7/16/05...$15.00
DOI: http://dx.doi.org/10.1145/2858036.2858427

the users' hands, head or body mapped to on-screen elements [14, 4, 9, 3].

Our work builds on two strands of previous research. First, Müller et al. [7] explored the effect of user representations in attracting people to notice and interact with a game. A lab study compared two kinesthetic-visual representations and two appearance matching representations (mirror and silhouette). People noticed the appearance matching representations more quickly. In a later in-the-wild comparison of the appearance matching representations, the mirror enticed more people to interact than the silhouette. The second strand comes from a series of in-the-wild studies of a non-game, information browsing interface with a kinesthetic-visual skeleton representation [1, 2, 11]. When Tomitsch et al. [11] analysed what the long-staying users actually did, they found that many played, often with the skeleton, rather than browsing the information. They concluded that the skeleton evoked playfulness. When Ackad et al. [1, 2] studied how well the interface supported learning the gestures, they also reported considerable play.

Building on these two strands, this paper compares a silhouette (appearance matching) and a skeleton (kinesthetic-visual) for their effect on playfulness, in an interface designed for browsing hierarchical information. We chose the silhouette – similar to one condition in Müller et al. [7] – avoiding the mirror representation, which some people found disconcerting [7]. We also analyse the effect of the representation for enticing users to interact, adding to the picture emerging from Müller et al.'s work, but now comparing this, for the first time, with a kinesthetic-visual representation, rather than two appearance matching representations.

Specifically, we tested the following hypotheses.

H1: Silhouette facilitates more serious interaction.
This is suggested by a lab study [15] where people reported a strong task focus, seeing the silhouette only as an enabler.

H2: Skeleton is more playful, so facilitates more play.
This is suggested by the studies described above [1, 2, 11], which reported playful behaviour rather than serious browsing.

H3: Skeleton causes people to stay longer.
This follows from the high rate of play among long-staying users at the browsing interface [11].


Figure 1. Media Ribbon location on a glass wall (left), opposite a theatre (right).

Figure 2. Media Ribbon, with two users in skeleton condition, browsing through the second level of the hierarchy.

SYSTEM OVERVIEW
The Media Ribbon is a large public information display (1.2m by 4.2m). Figure 1 shows its location, on the glass wall of a university building (left), opposite a public theatre (right), with a courtyard between and some pedestrian through traffic. It operates only at night, making the main population the theatre-goers, a changing and varied population. The Media Ribbon was designed to support browsing through a hierarchical information space, with information about the university and events at the theatre.

To help users realise the Media Ribbon is interactive, it displays the representation of detected users on screen and scrolls the display as they first walk by. When the user first faces the display, an interactive tutorial appears just above their representation's head. This guides them through the 4 mid-air gestures: Left and Right to move within the current level of the hierarchy; More and Back to move between the hierarchy levels. Left and Right gestures require the user to swipe the respective hand across their body; the display then mirrors the hand movement to make it appear that the ribbon follows the hand movement. More is triggered when the user raises either arm up. Back is triggered when the user swipes either arm down. (The startup tutorial also tells users how to skip the tutorial, using the Back gesture.)
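The gesture set above could be recognised from tracked joint positions roughly as follows. This is a hypothetical sketch, not the Media Ribbon's actual code: the joint names follow common depth-sensor SDKs, the thresholds are assumptions, the Back gesture is omitted because it requires motion over time rather than a single pose, and a real recogniser would smooth decisions over several frames.

```python
# Hypothetical single-frame classifier for three of the four gestures.
# Coordinates are assumed to be in metres, x increasing to the user's
# right, y increasing upwards.

def detect_gesture(joints):
    """Classify one skeleton frame as 'More', 'Left', 'Right', or None.

    `joints` maps joint names ('head', 'left_hand', 'right_hand',
    'left_shoulder', 'right_shoulder') to (x, y) tuples.
    """
    head_y = joints['head'][1]
    lx, ly = joints['left_hand']
    rx, ry = joints['right_hand']
    ls_x = joints['left_shoulder'][0]
    rs_x = joints['right_shoulder'][0]

    # "More": either arm raised above the head.
    if ly > head_y or ry > head_y:
        return 'More'
    # "Left"/"Right": the respective hand swiped across the body,
    # approximated here as the hand passing the opposite shoulder.
    if lx > rs_x:
        return 'Left'
    if rx < ls_x:
        return 'Right'
    # "Back" (either arm swiped down) needs velocity across frames,
    # so it cannot be detected from a single pose; omitted here.
    return None
```

A production recogniser would also debounce (one trigger per swipe) and ignore frames from users other than the one in control.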

The tutorial is one of three aids to learning the gestures. The always-present icon tutorial, at the centre bottom, can be seen in Figures 2 and 3. As each gesture is triggered, it is highlighted in blue on the icon tutorial, and gestures that are unavailable in the current state are greyed out. This can be seen for the Left gesture in Figure 3; the user is at the end of this level of the hierarchy, so cannot move right.

The Media Ribbon, illustrated in Figure 2, appears as a horizontal ribbon of tiles, each with an image or video. In the figure, the user is at the second level of the hierarchy. As the user navigates within the current level of the hierarchy, a new tile moves to the centre. This tile expands to give extra information. In Figure 2, the tile image is at the left and additional information has appeared at the right, as well as a reminder about the gestures to navigate down into the hierarchy for this item, or back up. When the user is at a leaf node, this area informs them how to Like the current item.

We can display the user as either a silhouette (Figure 3, left) or a skeleton, an abstract joint representation (Figure 3, right). In both conditions, the user who is in control is displayed in a light grey; other detected users are displayed in a darker grey. In Figure 2, the skeleton for the user in control has white "limbs".

Figure 3. The user representations, silhouette (left) and skeleton (right).


STUDY DESIGN
We designed the interface taking care to ensure that both user representations were as similar as possible in terms of size, shape and colour; both showed a full body representation. Our in-the-wild study ran for periods in each condition. The timing had to account for weekly patterns of theatre traffic (e.g. more people on Thursday to Saturday) and weather (e.g. heavy rain). We needed the same number of sessions in each condition. A session starts when a person is first detected and runs through their time at the display, often with others joining them, until the last user leaves. We pre-processed the data, excluding sessions where people were detected for less than a second as false positives. To compensate for temporary breaks due to sensor resets or someone walking in front of the sensor, we also combined some sessions.
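The session pre-processing just described could be sketched along these lines. This is a hypothetical reconstruction: the one-second false-positive cut-off comes from the text, but the merge gap used to bridge sensor dropouts is an assumed value.

```python
def clean_sessions(sessions, merge_gap=5.0):
    """Drop false positives and merge sessions broken by sensor resets.

    `sessions` is a list of (start, end) timestamps in seconds, sorted
    by start time. Sessions shorter than one second are discarded as
    false positives. Consecutive sessions separated by less than
    `merge_gap` seconds are combined, compensating for sensor resets
    or someone briefly blocking the sensor. The 5-second gap is an
    assumption, not a value from the study.
    """
    # Exclude detections shorter than one second (false positives).
    filtered = [(s, e) for s, e in sessions if e - s >= 1.0]

    merged = []
    for start, end in filtered:
        if merged and start - merged[-1][1] < merge_gap:
            # Small gap: treat as a continuation of the last session.
            merged[-1] = (merged[-1][0], max(merged[-1][1], end))
        else:
            merged.append((start, end))
    return merged
```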

For Hypotheses 1 and 2, we needed to analyse the interaction, assessing whether it was playful or serious browsing. To do this, we captured video of both the user view of the display and an untextured depth stream of people interacting with the display, at 10 frames per second. For Hypothesis 3, we used log data to determine how long people stayed at the display. In addition, we used this to determine conversion rates: comparing counts of users who were detected against those who faced the display, and of those who faced the display against those who went on to interact with it. Our time-stamped log data captured: the time of arrival and exit of users; the time each gesture was recognised and which user performed it; and when people faced the display.
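Given per-session events, the two conversion rates could be computed as below. The log schema shown is hypothetical, for illustration only, not the actual Media Ribbon log format.

```python
def conversion_rates(log):
    """Compute the two conversion rates described above.

    `log` is a list of (session_id, event) pairs, where event is one
    of 'detected', 'faced', 'gesture'. (Hypothetical schema.) Returns
    the fraction of detected sessions with a facing user, and the
    fraction of facing sessions with at least one gesture.
    """
    detected = {s for s, e in log if e == 'detected'}
    faced = {s for s, e in log if e == 'faced'}
    gestured = {s for s, e in log if e == 'gesture'}
    return {
        'faced_of_detected': len(faced & detected) / len(detected),
        'interacted_of_faced': len(gestured & faced) / len(faced),
    }
```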

We analysed all session videos where a user triggered a gesture. We coded them as Use and/or Play. Use included working through the initial tutorial steps and browsing the content. Play was any interaction that the coder considered playful. Sessions could have both. A second coder reviewed 20% of the videos (randomly selected and balanced for each condition). The Kappa agreement score was 0.947 for Play coding and 0.944 for Use coding.
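For reference, the Kappa agreement score for two coders is presumably Cohen's kappa, which corrects raw agreement for the agreement expected by chance. A minimal implementation over two coders' label lists:

```python
def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa for two coders' labels (e.g. Play / not-Play).

    kappa = (p_observed - p_expected) / (1 - p_expected), where
    p_expected is the chance agreement implied by each coder's
    marginal label frequencies.
    """
    assert len(coder_a) == len(coder_b) and coder_a
    n = len(coder_a)
    observed = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    labels = set(coder_a) | set(coder_b)
    expected = sum(
        (coder_a.count(l) / n) * (coder_b.count(l) / n) for l in labels
    )
    return (observed - expected) / (1 - expected)
```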

RESULTS
The study ran from 19 June to 31 August 2015, with data for 56 days (10 days were lost to bad weather and 7 to maintenance). Table 1 summarises the log data for the 483 people detected. To balance the number of sessions across conditions, the silhouette operated over 32 days giving 98 sessions, and the skeleton over 24 days giving 92 sessions. Row 3 shows the number and percentage of sessions where a person faced the screen: 96% for silhouettes and 93% for skeletons. The next row shows the number and percentage of these who went on to trigger at least one gesture; this was significantly higher for silhouettes (84%) than for skeletons (70%) (p=0.026).
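The paper does not state which statistical test produced p=0.026 for the facing-to-interacting comparison. One standard option for comparing two proportions is the pooled two-proportion z-test, sketched below; note that, depending on the test actually used (e.g. chi-square with continuity correction, or Fisher's exact test), this sketch need not reproduce the reported p-value exactly.

```python
from math import sqrt, erf

def two_proportion_z(x1, n1, x2, n2):
    """Two-sided z-test for a difference between two proportions.

    x1 of n1 successes in group 1 versus x2 of n2 in group 2.
    Returns the two-sided p-value under the pooled-variance normal
    approximation.
    """
    p1, p2 = x1 / n1, x2 / n2
    p = (x1 + x2) / (n1 + n2)              # pooled proportion
    se = sqrt(p * (1 - p) * (1 / n1 + 1 / n2))
    z = (p1 - p2) / se
    # Standard normal CDF via erf; two-sided tail probability.
    phi = 0.5 * (1 + erf(abs(z) / sqrt(2)))
    return 2 * (1 - phi)
```

Applied to the facing-to-interacting counts in Table 1 (79 of 94 silhouette sessions versus 62 of 86 skeleton sessions), this particular test gives a p-value slightly above 0.05, illustrating how the choice of test matters at the margin.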

The second last row in Table 1 shows that the skeleton condition had longer sessions (t-test, p=0.002), and the medians in the last row are 14% longer for the skeleton. This supports Hypothesis 3.

Video Coding
We now report on the analysis of playful versus serious use in the 141 sessions where users interacted with the display (215 minutes of interaction). Table 2 shows a summary. Only Used

                                 Silhouette     Skeleton
Users detected                   241            242
Sessions                         98             92
Sessions with facing users       94 (96%)       86 (93%)
Sessions with interacting users  79 (84%)       62 (70%)

For Interacting Users
Session Length (Mean, SD)        67.6s (68.4)   97.2s (123.09)
Session Length (Median)          50.4s          57.4s

Table 1. Summary of log data. Percentages are relative to the row above. Bold indicates statistically significant differences between conditions.

means that the coder saw only serious use: doing the tutorial and browsing the content. This was the dominant session type for both conditions. The row for Only Play was exclusively play. Used and Play had a mix. An A/B split test of statistical significance for these observations is reported below.

               Silhouette   Skeleton
Only Use       56 (70%)     34 (54%)
Only Play      8 (10%)      12 (19%)
Used and Play  15 (19%)     16 (26%)
Total          79           62

Table 2. Summary of coded sessions, for play versus serious use (doing the gesture tutorial or browsing). Bold indicates significant differences between conditions.

Use
Table 2 shows significantly more Use in the silhouette condition, 70% compared with 54% for skeletons (p=0.024). Combining this with Used and Play sessions gives a total of 71 (90%) silhouettes and 50 (79%) skeletons (p=0.064).

Analysis of the video for groups of two or more showed that the silhouette groups had more serious browsing behaviour, with users exploring the content and then handing over the display to others. Similar behaviours were seen for those skeleton groups in the Only Use category.

Play
The Only Play sessions were seen in 10% of silhouettes and 19% of skeletons; this is not significant (p=0.064). However, the Only Play session lengths are significantly different: for the silhouettes, they averaged 28.9 seconds (SD = 28.6) and for the skeleton this was 88.1 seconds (SD = 58.8) (p=0.017). In addition, for all sessions with any Play at all (Only Play + Used and Play), the difference is significant, with a total of 23 (29%) silhouettes and 28 (45%) skeletons (p=0.024).

We found that the playful behaviour was different across the two representations. Play in the silhouette condition appeared to be centred around using the gestures to manipulate the content (such as deliberate rapid back and forth movements through the content) or to interfere with another group member's interaction. We saw just one instance of people playing with their representation. Play with the skeleton was very different. For example, many people appeared to be manipulating the skeleton in a manner similar to puppeteering. Some


tested the limits of its tracking capabilities. Others patted another user on the head or shoulder or recreated a boxing match.

Play and Use
Table 2 shows a non-significant difference between the conditions for Play and Use (p=0.168). We analysed the order of activity in these sessions and found that both conditions had similarly low levels of play-first sessions (5 in each condition). More complex patterns were analysed but did not reveal any interesting differences.

DISCUSSION
User representations play important roles for large screen interactive displays: attracting attention to the display; indicating it is interactive; and helping in gesture learning by providing feedback on the gestures the user has attempted [1]. The nature of the user representation has the potential to affect all three of these. Our work now highlights that the representation can also alter the way that the user relates to the interface. Previous work [11] suggested that a skeleton representation might encourage play. That work was based on an analysis of just the longest sessions of interaction, so we conjectured that the more playful skeleton might result in longer sessions. We now return to our hypotheses.

H1: Silhouette facilitates more serious interaction.
Walter et al. [15] reported that people did not seem to find a silhouette remarkable, and in our observations, users did not attempt to manipulate or play with it, as many did with the skeleton. Our coded sessions indicate that significantly more sessions in the silhouette condition involved purely serious use (doing the tutorial and browsing the content), with serious use in 70% of silhouette sessions compared with 54% of those for skeletons. In each of these silhouette sessions, we saw both individuals and groups interact with the display without transitioning to play. Groups appeared to take turns, with one individual handing over control of the display to others. We also found a significant difference in the conversion rates, with the silhouettes giving 84% conversion from facing to interacting, compared with 70% for skeletons. Overall, our data supports the hypothesis. This suggests that the silhouette representation is more appropriate when designers intend their interface for serious interaction.

H2: Skeleton is more playful, so facilitates more play.
The skeleton resulted in significantly more sessions with play (45% against 29% for the silhouette), supporting this hypothesis. Play with the skeleton was mostly puppeteering to manipulate the skeleton, as also reported by Tomitsch et al. [11]. This behaviour was not seen in the silhouette condition. This adds to the picture that the skeleton may be more appropriate for interfaces designed for playfulness.

H3: Skeleton causes people to stay longer.
The skeleton sessions were significantly longer. The sessions with both play and serious use (19% for silhouettes and 26% for skeletons) contributed to the high proportion of sessions with any serious use (89% for silhouette sessions and 80% for skeletons). This adds nuance to the results discussed so far. While the skeleton appears to facilitate play, if a designer

is concerned about the level of serious use (with or without play), the skeleton may still be quite effective.

Limitations
The in-the-wild nature of our study introduced limitations and affected the population of users. As the display is outdoors, our study was affected by weather conditions. Technology limitations meant that the display operated only in evenings. We ran the silhouette condition first and then the skeleton. There were changing events at the theatre over the study duration. The videos indicate the population was adult, but we do not have further detailed data about the users. This limits the generalisability of our results to other contexts with other populations.

It is important to consider the effect of the novelty of large screen public displays controlled by gesture interaction. It is unclear whether our results will hold when such displays become more common. If people come to experience many of these displays, and associated user representations, that may well have a marked impact.

CONCLUSION
Our work is a partial replication of the lab study by Müller et al. [7]. However, our results are different from theirs. When they compared the effect of two kinesthetic-visual and two appearance matching representations, people noticed the latter more quickly. This would lead us to expect the silhouette to give a higher conversion rate from users detected to those facing the display. But in our study they were similar: 96% and 93%. It seems that, in the wild, the effect reported by Müller et al. was not replicated.

Our work complements their in-the-wild study in three key ways. They compared two appearance matching representations, where we compared one of these (the silhouette) against the kinesthetic-visual skeleton. They considered the power of the representations to entice interaction, but our focus was on the nature of the interaction, playful or serious. Theirs was a game interface, and the Media Ribbon was designed for a more serious activity, browsing a hierarchical information space. Broadly, our work contributes to understanding of the effects of the user representation for large screen interactive public displays that are controlled by gesture interaction.

We have presented analyses of log data and video for 56 active days (over a 73 day period) from an in-the-wild study, comparing the effects of a skeleton and silhouette user representation on user retention, serious use of the browsing interface and playfulness. Our study confirmed all three of our hypotheses: (H1) a silhouette representation leads to more serious behaviour, (H2) the skeletal representation leads to more play, and (H3) the skeletal representation leads to longer sessions. These findings can inform the design of the user representation on large screen public displays.

ACKNOWLEDGMENTS
This research was supported by funding from the Smart Services CRC and the Faculty of Engineering and Information Technologies, The University of Sydney, under the Faculty Research Cluster Program.


REFERENCES
1. Christopher Ackad, Andrew Clayphan, Martin Tomitsch, and Judy Kay. 2015. An In-the-wild Study of Learning Mid-air Gestures to Browse Hierarchical Information at a Large Interactive Public Display. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '15). ACM, New York, NY, USA, 1227–1238. DOI: http://dx.doi.org/10.1145/2750858.2807532

2. Christopher Ackad, Rainer Wasinger, Richard Gluga, Judy Kay, and Martin Tomitsch. 2013. Measuring Interactivity at an Interactive Public Information Display. In Proceedings of the 25th Australian Computer-Human Interaction Conference: Augmentation, Application, Innovation, Collaboration (OzCHI '13). ACM, New York, NY, USA, 329–332. DOI: http://dx.doi.org/10.1145/2541016.2541091

3. Ville Mäkelä, Tomi Heimonen, Matti Luhtala, and Markku Turunen. 2014b. Information Wall: Evaluation of a Gesture-controlled Public Display. In Proceedings of the 13th International Conference on Mobile and Ubiquitous Multimedia (MUM '14). ACM, New York, NY, USA, 228–231. DOI: http://dx.doi.org/10.1145/2677972.2677998

4. Ville Mäkelä, Tomi Heimonen, and Markku Turunen. 2014a. Magnetic Cursor: Improving Target Selection in Freehand Pointing Interfaces. In Proceedings of The International Symposium on Pervasive Displays (PerDis '14). ACM, New York, NY, USA, 112:112–112:117. DOI: http://dx.doi.org/10.1145/2611009.2611025

5. Robert W. Mitchell. 1993. Mental Models of Mirror-self-recognition: Two Theories. New Ideas in Psychology 11, 3 (1993), 295–325. DOI: http://dx.doi.org/10.1016/0732-118X(93)90002-U

6. Jörg Müller, Dieter Eberle, and Konrad Tollmar. 2014. Communiplay: A Field Study of a Public Display Mediaspace. In Proceedings of the 32nd Annual ACM Conference on Human Factors in Computing Systems (CHI '14). ACM, 1415–1424. DOI: http://dx.doi.org/10.1145/2556288.2557001

7. Jörg Müller, Robert Walter, Gilles Bailly, Michael Nischt, and Florian Alt. 2012. Looking Glass: A Field Study on Noticing Interactivity of a Shop Window. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '12). ACM, New York, NY, USA, 297–306. DOI: http://dx.doi.org/10.1145/2207676.2207718

8. Gonzalo Parra, Robin De Croon, Joris Klerkx, and Erik Duval. 2014. Quantifying the Interaction Stages of a Public Display Campaign in the Wild. In Proceedings of the 8th Nordic Conference on Human-Computer Interaction: Fun, Fast, Foundational (NordiCHI '14). ACM, New York, NY, USA, 757–760. DOI: http://dx.doi.org/10.1145/2639189.2639216

9. Garth Shoemaker, Anthony Tang, and Kellogg S. Booth. 2007. Shadow Reaching: A New Perspective on Interaction for Large Displays. In Proceedings of the 20th Annual ACM Symposium on User Interface Software and Technology (UIST '07). ACM, New York, NY, USA, 53–56. DOI: http://dx.doi.org/10.1145/1294211.1294221

10. Maurice Ten Koppel, Gilles Bailly, Jörg Müller, and Robert Walter. 2012. Chained Displays: Configurations of Public Displays Can Be Used to Influence Actor-, Audience-, and Passer-By Behavior. In Proceedings of the 2012 ACM Annual Conference on Human Factors in Computing Systems (CHI '12). ACM Press, New York, NY, USA, 317. DOI: http://dx.doi.org/10.1145/2207676.2207720

11. Martin Tomitsch, Christopher Ackad, Oliver Dawson, Luke Hespanhol, and Judy Kay. 2014. Who Cares About the Content? An Analysis of Playful Behaviour at a Public Display. In Proceedings of The International Symposium on Pervasive Displays (PerDis '14). ACM, New York, NY, USA, 160:160–160:165. DOI: http://dx.doi.org/10.1145/2611009.2611016

12. Nina Valkanova, Robert Walter, Andrew Vande Moere, and Jörg Müller. 2014. MyPosition: Sparking Civic Discourse by a Public Interactive Poll Visualization. In Proceedings of the 17th ACM Conference on Computer Supported Cooperative Work & Social Computing (CSCW '14). ACM, 1323–1332. DOI: http://dx.doi.org/10.1145/2531602.2531639

13. Robert Walter, Gilles Bailly, and Jörg Müller. 2013. StrikeAPose: Revealing Mid-air Gestures on Public Displays. In Proceedings of the SIGCHI Conference on Human Factors in Computing Systems (CHI '13). ACM, New York, NY, USA, 841–850. DOI: http://dx.doi.org/10.1145/2470654.2470774

14. Robert Walter, Gilles Bailly, Nina Valkanova, and Jörg Müller. 2014. Cuenesics: Using Mid-air Gestures to Select Items on Interactive Public Displays. In Proceedings of the 16th International Conference on Human-Computer Interaction with Mobile Devices & Services (MobileHCI '14). ACM, 299–308. DOI: http://dx.doi.org/10.1145/2628363.2628368

15. Robert Walter, Andreas Bulling, David Lindlbauer, Martin Schuessler, and Jörg Müller. 2015. Analyzing Visual Attention During Whole Body Interaction with Public Displays. In Proceedings of the 2015 ACM International Joint Conference on Pervasive and Ubiquitous Computing (UbiComp '15). ACM, New York, NY, USA, 1263–1267. DOI: http://dx.doi.org/10.1145/2750858.2804255