

Visualization and Interaction for Business and Entertainment

Microsoft Engineering Excellence

June 6, 2006

From Scatterbrained to Focused: UI Support for Today’s Crazed Info Worker

Mary Czerwinski, Principal Researcher

Manager, VIBE, Microsoft Research

Overview

• Background Studies
  – Diary study
  – Large display findings

• Information worker productivity

• Programmer productivity and business intelligence

• Future directions

Diary Study: Motivation

• Hypothesis: Current software does not support multitasking well
  – How bad/universal is the problem?

• Seek SW design ideas
  – Research shows users developing workaround strategies
  – Interruptions research shows harmful effects of incoming notifications on the current task
  – Memory for To-Dos is poor and undersupported
  – Need to better understand task switching and multitasking

Method

• 10 multitasking users recruited

• An Excel spreadsheet was used as a diary "template" to be filled out each day

• Diaries emailed back to me each evening

• Participants instructed to write down every “task switch”

– how hard to switch, # of docs required, # of interrupts experienced, task time, anything forgotten, notes, etc.
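The per-switch fields above could be captured as one record per diary row; a minimal sketch (field names and example values are assumptions, not the study's actual template):

```python
from dataclasses import dataclass

@dataclass
class TaskSwitch:
    """One diary row: a single task switch, with the fields the slide lists."""
    task: str                 # what the participant switched to
    difficulty: int           # how hard to switch (1=low .. 3=high)
    num_docs: int             # number of documents required
    num_interrupts: int       # interruptions experienced
    minutes: float            # task time
    forgot_anything: bool     # anything forgotten on resumption?
    notes: str = ""

# a hypothetical partial day
day = [
    TaskSwitch("email triage", 1, 1, 2, 25.0, False),
    TaskSwitch("project report", 3, 4, 1, 90.0, True, "waiting on draft"),
]
total_minutes = sum(s.minutes for s in day)
```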

Partial diary for MS (6 hours)

Task Frequencies Breakdown

Frequency of Task Type:
– Downtime: 0%
– Email: 23%
– Meeting: 6%
– Personal: 5%
– Project: 18%
– Routine Task: 27%
– Telephone Call: 8%
– Task Tracking: 13%

Indicative of Difficulty Tracking Tasks

“Returned to” Tasks from this group

Frequency of Task Shift Initiators

Frequency of Switch Causes:
– Email: 3%
– Next Task: 19%
– Self-Initiated: 40%
– Telephone Call: 14%
– Return to Task: 7%
– Other Person: 1%
– New Information Request: 3%
– Emergency: 1%
– Appointment: 9%
– Deadline: 2%
– App Prompt: 1%

Difficulty Switching by Type

[Chart: rated difficulty of switching to task (1 = Low, 2 = Med, 3 = High) by task type, comparing other tasks vs. returned-to tasks]

Task Length by Type

[Chart: average task duration in minutes (0–160) by task type, other tasks vs. returned-to tasks]

Document Requirements by Task Type

[Chart: average number of documents (0–3) by task type, other tasks vs. returned-to tasks]

Interruptions by Task Type

[Chart: average number of interruptions (0–2) by task type, other tasks vs. returned-to tasks]

Discussion of Findings

• During a given week, knowledge workers task-shift an awful lot (avg. 10 task shifts a day)

• Long-term projects are more complex shifts
  – Lengthier (11.25% of the week), more documents, interrupts, "returns"
  – Rated significantly harder to return to

• Negative influence of interrupts on multitask performance and memory well known

• Passage of time also takes its toll

• What designs will help?

General Design Ideas from Participants

• Smarter, adjustable To Do list tracking & alarming

– In the projects versus just in Calendar

– Consider sticky notes for partial / future tasks

• Auto-categorization of email and files

• Better reminders for things forgotten

– Track events we know about and visualize them, or rely on user manual tagging

• Better user adaptivity

– e.g., knowing what kinds of paste operations a user typically performs and automating them

Focus on Returned-to Tasks

• Elapsed time spanned hours to days

• Maintaining desktop state isn't always the answer
  – Often, users said they were waiting on info from other people or places (web, server): prospective reminders needed here
  – Info came in via phone, email, web, or personal contacts (better app integration needed here)
  – But reminding about task context and info assembly/layout was a key problem identified

About the same time… Large Display Findings

• Started exploring how user behavior changes as displays increase in size and resolution

• Found that users were significantly more productive when performing knowledge work (multitasking, task switching) with large displays

• Less window management = less cognitive load

• But still needed help with task management

Tools for Task Management

• GroupBar joins related items in the taskbar, remembers spatial layouts of tasks (Smith et al., 2003)
  – Desktop "snapshots"
  – Can "rehydrate" tasks with the press of a button

• Scalable Fabric and VibeLog (AVI 2004)
  – Over 5000 downloads of SF
  – Logging of task activity

Color Plate 1. Scalable Fabric showing the representation of three tasks as clusters of windows, and a single window being dragged from the focus area into the periphery.

New iWorker Productivity Solutions

• Task Tracking

• Event logging: StatusWriter

• Dev team navigation tracking

• FacetMap and FaThumb

• Sensing and adapting


Swish: Semantic Analysis of Window Titles and Switching History

Nuria Oliver, Greg Smith, Chintan Thakkar, Arun Surendran

Automatic Window Clustering

• Goal: assist users in managing their tasks

• Assumption: windows belonging to the same task share some common features that can be identified from data

• In SWISH we explore:
  – Title-based clustering ("Re: Dad, get me Potter-6" & "Amazon: Harry Potter and the Half Blood Prince")
  – Behavior-based (switching history) clustering (looking into the MSDN Library while coding in Visual Studio)

[Figure: three example clusters, each with a few exemplary window titles and its top 3 extracted keywords, e.g. "Harry Potter book", "Donna Malayeri review", "Expedia flight trip"]
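The title-based side of this can be illustrated with a toy word-overlap clusterer. This is only a sketch of the idea (the stop list, threshold, and greedy assignment are invented), not the SWISH algorithm itself:

```python
from collections import Counter

STOP = {"re", "the", "and", "a", "of", "for", "to", "in", "me", "get"}

def tokens(title):
    """Lowercase alphanumeric words, minus stop words and very short tokens."""
    words = "".join(c.lower() if c.isalnum() else " " for c in title).split()
    return {w for w in words if w not in STOP and len(w) > 2}

def jaccard(a, b):
    return len(a & b) / len(a | b) if a | b else 0.0

def cluster_titles(titles, threshold=0.1):
    """Greedily assign each title to the first cluster it overlaps with."""
    clusters = []  # each cluster: (Counter of keywords, list of member titles)
    for title in titles:
        tk = tokens(title)
        for words, members in clusters:
            if jaccard(tk, set(words)) >= threshold:
                words.update(tk)
                members.append(title)
                break
        else:
            clusters.append((Counter(tk), [title]))
    # label each cluster with its top-3 keywords, as in the figure above
    return [(members, [w for w, _ in words.most_common(3)])
            for words, members in clusters]

groups = cluster_titles([
    "Re: Dad, get me Potter-6",
    "Amazon: Harry Potter and the Half Blood Prince",
    "Expedia flight itinerary",
])
# the two Potter titles land in one cluster whose top keyword is "potter"
```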

Applications

• GroupBar
  – Automatic or semi-automatic clustering
  – Automatic keyword extraction labels for the groups

• Implicit Query
  – Display information relevant to the current window

• Automatic Window Clean-up Application
  – Users open dozens of Explorer windows, and are too lazy to kill them
  – Collapse unused, unrelated windows to a single cluster
  – Provide an option, after a timeout, to kill all together
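The clean-up idea could work roughly as follows; a minimal sketch, where the timeout value and window representation are assumptions:

```python
import time

CLEANUP_TIMEOUT = 30 * 60  # seconds a window may sit unused (assumed value)

def partition_windows(windows, now, timeout=CLEANUP_TIMEOUT):
    """Split windows into active ones and stale ones to collapse into one cluster.

    `windows` maps a window id to the timestamp of its last use.
    """
    stale = [w for w, last_used in windows.items() if now - last_used > timeout]
    active = [w for w in windows if w not in stale]
    return active, stale

now = time.time()
windows = {"explorer:1": now - 3600, "msdn": now - 10, "vs": now}
active, stale = partition_windows(windows, now)
# stale windows would be collapsed into a single cluster, with an
# option after the timeout to kill them all together
```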

StatusWriter

• Automatic status report writer

• View time spent by app/doc

• View by day, week, month, etc.

StatusWriter Continued

• Can also view by day

• Exports text info to Excel for further analysis

• Future version to include
  – Calendar
  – Tagging
  – SWISH++
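The "time spent by app/doc" view amounts to aggregating focus-change events; a minimal sketch (the event format, app names, and timestamps are assumptions, not StatusWriter's actual log):

```python
from collections import defaultdict

def time_by_app(events):
    """Sum focus time per app from (timestamp_seconds, app) focus-change events.

    Each app is credited with the interval until the next focus change.
    """
    totals = defaultdict(float)
    for (t, app), (t_next, _) in zip(events, events[1:]):
        totals[app] += t_next - t
    return dict(totals)

events = [
    (0, "Outlook"),
    (600, "Visual Studio"),
    (4200, "Outlook"),
    (4500, "Word"),
    (5100, "Word"),  # final event marks the end of logging (hypothetical)
]
report = time_by_app(events)
# → {'Outlook': 900.0, 'Visual Studio': 3600.0, 'Word': 600.0}
```

Rolling the same totals up by day, week, or month is then just a matter of bucketing the timestamps before summing.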


Clipping Lists and Change Borders

Peripheral Information Display

Tara Matthews, Mary Czerwinski, George Robertson, and Desney Tan

Why Would Abstraction in Peripheral Information Help?

• Imagine…
  – You are balancing 5 tasks
  – You have 18 windows open on your desktop
  – You are waiting on the next draft of a paper, code to be checked in to CVS, and an email

• You want to know…
  – When should you switch back to a task?
  – When you switch tasks, what were you working on?
  – When new info arrives, can you safely ignore it?

Study of Proposed Solutions: Clipping Lists and Change Borders

• Compare interfaces with varying types of abstraction
  – All interfaces based on Scalable Fabric (SF)

• Abstraction types:
  – Change detection
  – Semantic content extraction

• 4 interfaces:
  – SF
  – Semantic content extraction (Clippings)
  – SF + change detection
  – Semantic content extraction + change detection

Baseline: Scalable Fabric

• Tasks as piles

• Windows shrunken

[2×2 interface grid: SF | Clippings; SF + Change Detection | Clippings + Change Detection]

Change Borders

• Adds red borders around windows whose content is changing

• Border turns green when change is complete
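The border behavior described above can be written as a tiny state machine; this is a sketch of the idea only, not the actual Scalable Fabric implementation:

```python
# Border states for a peripheral window.
NONE, RED, GREEN = "none", "red", "green"

def next_border(state, content_changing, user_viewed):
    """One step of the change-border state machine."""
    if content_changing:
        return RED              # red while the window's content is changing
    if state == RED:
        return GREEN            # turns green once the change is complete
    if state == GREEN and user_viewed:
        return NONE             # cleared after the user has seen the change
    return state

state = NONE
for changing, viewed in [(True, False), (True, False), (False, False), (False, True)]:
    state = next_border(state, changing, viewed)
# the border goes red, stays red, turns green, then clears
```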


Clipping Lists

• Extracts window content

• Two ways to select content
  – Default: title bar
  – User WinCut
  – Future: AI

• Goal of selection: help with recognition, resumption timing, and flow


Clipping Lists + Change Borders

• Extracts window content

• Adds green highlight to task boundary & windows that have changed


Study Results

• Semantic content extraction (Clipping Lists)
  – Is more effective than both change detection and scaling
  – Significantly benefits:
    • Task flow
    • Resumption timing
    • Reacquisition

Average Task Times

[Chart: average task time in seconds (540–700) for SF, SF + Change, Clippings, and Clippings + Change]


Average Time to Resume Quiz

[Chart: average resumption time in seconds (0–90) for SF, SF + Change, Clippings, and Clippings + Change]


Programmer Productivity: Team Tracks

• We have observed devs struggling with unfamiliar code
  – Inefficient navigation to find task-relevant code
  – Misleading results of text searches
  – Disorientation from too much navigation, too many open files, interruptions
  – [DeLine, Khella, Czerwinski, Robertson SoftVis '05], [Ko, Aung, Myers ICSE '05]

• Team Tracks guides code exploration
  – Records the team's code navigation during development
  – Mines that data to prune the working set and guide navigation
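At its simplest, that mining step is frequency ranking over the shared navigation log; a minimal sketch (the log format and code-location names are invented for illustration, not Team Tracks' actual data model):

```python
from collections import Counter

def rank_locations(nav_log, top_n=5):
    """Rank code locations by how often the team navigated to them.

    `nav_log` is a sequence of (developer, location) navigation events;
    frequently visited locations are treated as the most task-relevant,
    which is the intuition Study 1 tests.
    """
    visits = Counter(loc for _, loc in nav_log)
    return [loc for loc, _ in visits.most_common(top_n)]

nav_log = [
    ("dev1", "Parser.Parse"), ("dev1", "Lexer.Next"), ("dev2", "Parser.Parse"),
    ("dev2", "Parser.Parse"), ("dev3", "Ast.Node"), ("dev3", "Parser.Parse"),
    ("dev1", "Lexer.Next"),
]
working_set = rank_locations(nav_log, top_n=2)
# Parser.Parse (4 visits) ranks above Lexer.Next (2 visits)
```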

Evaluating Team Tracks

• Study 1: Does nav frequency indicate importance?
  – Setup: four programming tasks, then ratings questionnaire and quiz
  – Dependent measures: code paths, task completion, ratings, quiz scores
  – Hypothesis: navigation frequency correlates with importance rating [reported at SoftVis '05]

• Study 2: Does Team Tracks improve productivity?
  – Use Team Tracks with Group 1's navigation data
  – Same setup and dependent measures
  – Hypothesis: Team Tracks improves task completions and quiz scores

Navigation frequency does correlate with importance ratings

• Pearson product moment correlation, r=0.79, p<0.01
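For reference, the statistic reported here is computed as below; the sample data is hypothetical (the slide does not give the raw values), so the resulting r illustrates the calculation, not the study's r = 0.79:

```python
import math

def pearson_r(xs, ys):
    """Pearson product-moment correlation between two equal-length samples."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical data: visit counts vs. importance ratings for six code locations
visits  = [12, 9, 7, 4, 2, 1]
ratings = [5, 5, 4, 3, 2, 2]
r = pearson_r(visits, ratings)
```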

Team Tracks does improve task completion rates and quiz scores

• Improved task completion rates
  – All completed tasks 1 and 2
  – Task 3 (localized code): 1/7 without, 3/9 with Team Tracks
  – Task 4 (dispersed code): 1/7 without, 7/9 with Team Tracks

• Group 2 quiz scores significantly higher, t(16) = -2.04, p < .03

• Next steps
  – IE 8.0 team deployment ethnography next
  – Added annotations and other features


FaThumb: A Facet-based Interface for Mobile Search

Amy K. Karlson (U of Maryland), George Robertson, Daniel C. Robbins, Mary Czerwinski, Greg Smith

VIBE Group

Microsoft Research

FaThumb: Overview + Video

• Keypad is least-common-denominator– Cell-phone– Remote control– ATM– Number key-pad

• Typing text is hard

• Let users browse data attributes taxonomy (facets)
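Facet browsing on a keypad amounts to walking a taxonomy with digit keys; a toy sketch, where the taxonomy and key assignment are invented for illustration and are not FaThumb's actual data:

```python
# Each digit 1-9 selects one child of the current facet node.
FACETS = {
    "Location": {"Seattle": {}, "Redmond": {}, "Bellevue": {}},
    "Category": {"Restaurants": {"Thai": {}, "Pizza": {}}, "Movies": {}},
    "Hours": {"Open now": {}},
}

def keypad_navigate(tree, digits):
    """Follow keypad presses down the facet taxonomy, returning the path."""
    path = []
    node = tree
    for d in digits:
        children = sorted(node)   # stable mapping: key 1 = first child, etc.
        choice = children[int(d) - 1]
        path.append(choice)
        node = node[choice]
    return path

# Pressing 1, then 2: the first top-level facet, then its second child
path = keypad_navigate(FACETS, "12")
```

The point of the design is that no text entry is needed: each press narrows the query by one facet value.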

[Screenshot: FaThumb UI showing the results list, facet navigation area, menu, current query/search terms, and the standard keypad]

FacetMap: Faceted Search of All Your Stuff

Sensing

• HealthGear

• Brain-computer interaction

Future Directions

• Information worker productivity
  – Intelligent summaries and visualizations of tasks

• IR & Info Vis
  – FacetMap
  – FaThumb on SmartPhone
  – Info vis toolkit prototype in January
  – Rich desktop search client

• Interaction techniques
  – Adaptive UI: study predictability
  – Other, step-based UIs

• Sensing
  – HealthGear: whole new line of research and networking to be done
  – BCI: actually use it while running real applications


Thank you for your attention!