
Transcript of MoodScope Mobisys pdf.pptx

MoodScope: Sensing mood from smartphone usage patterns

Robert LiKamWa

Lin Zhong

Yunxin Liu

Nicholas D. Lane

Microsoft Research Asia

Mood-Enhanced Apps

•  Social ecosystems

•  Media recommendation

•  Personal analytics


Affective Computing (Mood and Emotion)

•  Audio/Video-based (AffectAura, EmotionSense): captures expressions; power hungry; slightly invasive

•  Biometric-based (skin conductivity, temperature, pulse rate): highly temporal; high cost of deployment; hassle


Can your mobile phone infer your mood?

From already-available, low-power information?*

* No audio/video sensing, no body instrumentation

MoodScope ∈ Affective Computing

•  Audio/Video-based: captures expressions; power hungry; slightly invasive

•  Biometric-based: very direct, fine-grained; high cost of deployment

•  Usage Trace-based (MoodScope): passive, continuous; how to model mood?


Mood is…

•  … a persistent, long-lasting state

o  Lasts hours or days

o  Emotion lasts seconds or minutes

•  … a strong social signal

o Drives communications

o Drives interactions

o Drives activity patterns


[Figure: Circumplex model of mood (Russell 1980), placing moods such as happy, excited, attentive, nervous, stressed, sad, depressed, bored, calm, and relaxed along pleasure and activeness axes.]


How is the user communicating?

What apps is the user using?

f(usage) = mood


iPhone LiveLab Logger

•  Web history

•  Phone call history

•  SMS history

•  Email history

•  Location history

•  App usage

Adapted from C. Shepard, A. Rahmati, C. Tossell, L. Zhong, and P. Kortum, "LiveLab: Measuring Wireless Networks and Smartphone Users in the Field," in HotMetrics, 2010.


Runs as a background shell process

Hashes private data (see the sketch below)

Nightly uploads
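The logger's privacy step could look like the following minimal sketch, assuming a salted SHA-256 digest applied to identifying fields such as contacts and URLs before the nightly upload; the salt and field names are illustrative, not the actual LiveLab implementation.

```python
import hashlib

# Illustrative only: hash identifying fields (contacts, URLs) with a
# per-device salt so logs can be analyzed without exposing raw identifiers.
DEVICE_SALT = "per-device-secret"  # hypothetical; LiveLab's actual scheme may differ

def hash_private(value: str) -> str:
    """Return a salted SHA-256 digest of a private identifier."""
    return hashlib.sha256((DEVICE_SALT + value).encode("utf-8")).hexdigest()

record = {"type": "call", "contact": "+1-555-0100", "duration_s": 312}
record["contact"] = hash_private(record["contact"])  # upload only the digest
```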

Mood Journaling App

User base: 32 users aged between 18 and 29; 11 females

Inference

•  Detect a mood pattern

•  Validate with only 60 days of data

•  Wide range of candidate usage data

•  Low computational resources


Daily Mood Averages

•  Separate pleasure and activeness dimensions

•  Take the average over a day (sketched below): daily average = Σ(the day's mood inputs) / 4
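A minimal sketch of the daily averaging step, assuming each journal entry carries a date plus pleasure and activeness scores; the field layout and values are illustrative.

```python
from collections import defaultdict
from statistics import mean

# entries: (day, pleasure, activeness) tuples from the mood journaling app;
# the study collects several (here four) mood inputs per day.
entries = [
    ("2012-05-01", 4, 3), ("2012-05-01", 3, 2),
    ("2012-05-01", 4, 2), ("2012-05-01", 5, 3),
]

by_day = defaultdict(list)
for day, pleasure, activeness in entries:
    by_day[day].append((pleasure, activeness))

# One (pleasure, activeness) average per day
daily_avg = {
    day: (mean(p for p, _ in vals), mean(a for _, a in vals))
    for day, vals in by_day.items()
}
print(daily_avg)  # {'2012-05-01': (4, 2.5)}
```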


Exploring Features

•  Communication

o SMS

o Email

o Phone Calls

•  To whom?

o # messages

o Length/Duration

Consider “Top 10” Histograms

How many phone calls were made to #1? #2? … #10?

How much time was spent on calls to #1? #2? … #10?
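A minimal sketch of one such feature, assuming a list of (contact, duration) call records; it builds the two top-10 histograms the slide asks about, call counts and total call time per top contact. The variable names and data are illustrative.

```python
from collections import Counter

# calls: (hashed_contact, duration_seconds) records over the training window
calls = [("c1", 120), ("c2", 30), ("c1", 300), ("c3", 45), ("c2", 60)]

counts = Counter(c for c, _ in calls)          # number of calls per contact
durations = Counter()
for contact, dur in calls:
    durations[contact] += dur                  # total talk time per contact

top10 = [c for c, _ in counts.most_common(10)]  # the "top 10" contacts

# Two 10-dimensional histogram features, zero-padded if fewer than 10 contacts
call_count_hist = [counts[c] for c in top10] + [0] * (10 - len(top10))
call_time_hist = [durations[c] for c in top10] + [0] * (10 - len(top10))
```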



Exploring Features

•  Communication

o SMS

o Email

o Phone Calls

•  To whom?

o # messages

o Length/Duration

•  Usage Activity

o Applications

o Websites visited

o Location History

•  Which (app/site/location)?

o # instances


Previous Mood

•  Use the previous 2 pairs of mood labels (the prior days' pleasure and activeness averages)


Data Type                                   Histogram by      Dimensions
Email contacts                              # Messages        10
Email contacts                              # Characters      10
SMS contacts                                # Messages        10
SMS contacts                                # Characters      10
Phone call contacts                         # Calls           10
Phone call contacts                         Call Duration     10
Website domains                             # Visits          10
Location clusters                           # Visits          10
Apps                                        # App launches    10
Apps                                        App Duration      10
Categories of Apps                          # App launches    12
Categories of Apps                          App Duration      12
Previous Pleasure and Activeness Averages   N/A               4
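Putting the table together, one day's feature vector could be assembled roughly as below; the placeholder arrays stand in for histograms computed as in the call example above, and only the dimensions follow the table.

```python
import numpy as np

# Illustrative assembly of one day's feature vector following the table;
# random placeholders stand in for the real per-category histograms.
rng = np.random.default_rng(0)

ten_dim_hists = [rng.random(10) for _ in range(10)]    # email/SMS/call/web/location/app histograms
twelve_dim_hists = [rng.random(12) for _ in range(2)]  # app-category histograms
prev_moods = np.array([3.5, 2.0, 3.8, 2.2])            # previous pleasure/activeness averages

x = np.concatenate(ten_dim_hists + twelve_dim_hists + [prev_moods])
assert x.shape == (128,)  # 10*10 + 2*12 + 4 dimensions per day
```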



•  Multi-Linear Regression

o  Minimize Mean Squared Error

•  Leave-One-Out Cross-Validation

•  Sequential Forward Feature Selection during training
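A minimal sketch of this training pipeline, using scikit-learn as a stand-in (the slides do not show the paper's own implementation); X would be the per-day feature matrix described in the table above and y the daily pleasure (or activeness) averages. The data here is random and the dimensionality is reduced for a quick run.

```python
import numpy as np
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

# Placeholder data: ~60 days of usage features and daily pleasure averages
# (the real feature vector is ~128-dimensional; 32 here keeps the sketch fast).
rng = np.random.default_rng(0)
X = rng.random((60, 32))
y = rng.uniform(1, 5, size=60)

model = make_pipeline(
    # Sequential forward selection: greedily add the feature that most
    # improves cross-validated MSE, as in the curve on the next slide.
    SequentialFeatureSelector(
        LinearRegression(),
        n_features_to_select=5,          # illustrative stopping point
        direction="forward",
        scoring="neg_mean_squared_error",
        cv=3,
    ),
    LinearRegression(),                  # multi-linear regression on the selected features
)

# Leave-one-out cross-validation of the whole pipeline
mse = -cross_val_score(model, X, y, cv=LeaveOneOut(), scoring="neg_mean_squared_error")
print("LOOCV mean squared error:", mse.mean())
```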


Model Design: Sequential Feature Selection

[Figure: Mean squared error vs. number of features used, showing the improvement of the model as SFS adds more features; each line is a different user.]


Sample Prediction

[Figure: Daily mood average (pleasure) vs. days (0 to 60) for one user, comparing the reported mood with the estimated mood.]


Error Distributions

•  A squared error greater than 0.25 will misclassify a mood label

•  93% of estimates have squared error below 0.25

[Figure: Squared error per user on a log scale, marking the 75th and 90th percentiles.]


vs. Strawman Models

Models using full knowledge of a user's data, with LOOCV

Model A (assume the user's average mood): 73% accuracy

Model B (assume the user's previous mood): 61% accuracy

MoodScope Training: 93% Accuracy.
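To make the comparison concrete, here is a minimal sketch of how such accuracies could be computed under the earlier slide's criterion (a prediction counts as correct when its squared error stays below 0.25); the data and the exact formulation of the two baselines are illustrative.

```python
import numpy as np

def accuracy(pred, truth, threshold=0.25):
    """Fraction of days whose squared error stays under the misclassification threshold."""
    return float(np.mean((pred - truth) ** 2 < threshold))

truth = np.array([3.2, 3.5, 2.8, 3.9, 3.1, 3.6])      # daily pleasure averages (illustrative)
moodscope = np.array([3.3, 3.4, 3.0, 3.7, 3.2, 3.5])  # regression estimates (illustrative)

model_a = np.full_like(truth, truth.mean())           # Model A: assume the user's average mood
model_b = np.concatenate(([truth[0]], truth[:-1]))    # Model B: assume the user's previous mood

for name, pred in [("MoodScope", moodscope), ("Model A", model_a), ("Model B", model_b)]:
    print(name, accuracy(pred, truth))
```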


Personalized Training

[Figure: Model accuracy vs. training days (10 to 59) for the incremental personalized model.]


Personalized/All-user Hybrid Training

[Figure: Model accuracy vs. training days for the incremental personalized model and the hybrid mood model, compared against the all-user model accuracy.]
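The slides do not spell out how the hybrid model combines the two sources, so the following is only one plausible reading: blend the all-user model's prediction with the personalized model's prediction, shifting weight toward the personal model as training days accumulate. The weighting rule is an assumption, not the paper's stated method.

```python
def hybrid_predict(personal_pred, all_user_pred, training_days, ramp_days=30):
    """Blend predictions, trusting the personalized model more as data accumulates.

    Assumed weighting: a linear ramp over `ramp_days`; the actual hybrid
    scheme in MoodScope may differ.
    """
    w = min(training_days / ramp_days, 1.0)
    return w * personal_pred + (1.0 - w) * all_user_pred

print(hybrid_predict(personal_pred=3.8, all_user_pred=3.2, training_days=10))  # ~3.4
```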


Resource-friendly Implementation

Phone: collects mood inputs and usage logs, keeps the mood and usage history, and applies the mood model to the current usage to produce an inferred mood, exposed to applications through an Inferred Mood API.

Cloud: receives the mood and usage history, runs model training, and sends the trained mood model back to the phone.
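As a closing sketch of that split, assuming the cloud ships back plain linear-regression weights so that on-device inference reduces to a dot product per mood dimension; the class and method names are illustrative, not MoodScope's actual API.

```python
import numpy as np

class MoodModel:
    """Linear mood model trained in the cloud, evaluated on the phone."""

    def __init__(self, weights: np.ndarray, bias: float):
        self.weights = weights   # one coefficient per selected usage feature
        self.bias = bias

    def infer(self, features: np.ndarray) -> float:
        # Cheap on-device inference: a single dot product per mood dimension.
        return float(self.weights @ features + self.bias)

# Hypothetical "Inferred Mood API": an application queries today's estimate.
pleasure_model = MoodModel(weights=np.array([0.02, -0.01, 0.5]), bias=2.8)
today_features = np.array([12.0, 30.0, 0.6])
print("inferred pleasure:", pleasure_model.infer(today_features))
```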

MoodScope: Sensing mood from smartphone usage patterns

•  Robustly (93% accuracy) detects each dimension of daily mood

o  On personalized models

o  Starts at 66% on generalized models

•  Validated with 32 users × 2 months of data

•  Simple, resource-friendly implementation