Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London...

When using wearable cameras in ethnography, we should analyse the data in 2 stages: 1) Identify episodes/activities 2) Categorise their behavioural type & context British Heart Foundation Health Promotion Research Group Aiden Doherty’s talk in 1 slide...

Description

Invited talk given at London School of Economics on Friday 10th May 2013. This was part of a seminar series on "First Person Perspective Digital Ethnography". New methods of digital data capture create new problems for analysing data. How should the sheer volume of data be stored, searched and analysed? How can multiple types of first person perspective data (e.g., video, audio, location, movement, eye-tracking, biosensor) be integrated and analysed? What software platforms currently support such a diversity of data and perspectives?

Transcript of Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London...

Page 1: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

When using wearable cameras in ethnography, we should analyse the data in 2 stages:

1) Identify episodes/activities

2) Categorise their behavioural type & context

British Heart Foundation Health Promotion Research Group

Aiden Doherty’s talk in 1 slide...

Page 2: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Microsoft Early career in computer science...

Page 3: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

SenseCam Hodges, Ubicomp conf., 2006

My interest is now in public health...

Something from department? Maybe Charlie Cochrane reviews? Maybe

Paul WHO document? Maybe Heart Stats?

Page 4: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

This talk reflects the work of many...

Page 5: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

The Lancet 2013, 380(9859), pp. 2095-2128

Global burden of disease...

Use recent Lancet website

Page 6: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

The Lancet 2013, 380(9859), pp. 2095-2128

Main diseases are lifestyle related...

Highlight main diseases are lifestyle related

Page 7: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Sallis 2000, Ann. Behav. Med. 22(4):294-298

Behavioural epidemiology framework...

Sallis & Owen, highlight measurement

Page 8: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Troiano; Med Sci Sport Exer 2008; 40(1):181-8 Craig; NCSR 2008 – Health Survey for England

There is a big measurement problem...

Self-report: 50% Accelerometer: 5%

Self-report: 38% Accelerometer: 5%

% adults meeting physical activity recommendations

Page 9: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Kelly, IJBNPA 2011, 8:44 Armstrong, Sports Med 2006, 36:1067–1086

Self-report has limitations... Recall error...

Comprehension problems... Social desirability error...

Page 10: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

SenseCam Hodges, Ubicomp conf., 2006

Current objective measures limitations...

Show acc != behaviour slide … also GPS/GIS slides too

Page 11: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

SenseCam Hodges, Ubicomp conf., 2006

Wearable cameras identify behaviours...

Video of wearable camera data

Page 12: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Mann, Computer 1997, 30; 25-32 Tano, ICME conf 2006; 649-652

Past efforts focused on hardware storage and miniaturisation...

Page 13: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Vicon Autographer

New devices are now smaller...

Page 14: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Memoto

New devices are now smaller...

Page 15: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Google Glass

Big companies like Google are now producing wearable cameras...

Page 16: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Gurrin 2013; Am J Prev Med 44(3):308-313

Smartphones too...

Page 17: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

SenseCam

SenseCam Hodges, Ubicomp conf., 2006

Microsoft’s SenseCam is the most popular in health research...

Page 18: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

- Active travel
- Sedentary behaviour
- Nutrition
- Environment exposures
- Contextualising other sensors

Doherty 2013, Am J Prev Med 43(5), 320 – 323

Wearable cameras in health...

Page 19: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Kelly, Intl J Behav Nutr Phys Act 2011, 8:44

Active travel – U.K. NTS diary...

Page 20: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Kelly, Intl J Behav Nutr Phys Act 2011, 8:44

Journey time = 20 minutes

Active travel – self-report entry...

Page 21: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Kelly, Intl J Behav Nutr Phys Act 2011, 8:44

Journey time = 12 min 38 sec

Active travel – wearable camera entry...

Page 22: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

All journeys +2 min 34 sec (S.E. 32 sec)

Car +2 min 08 sec (S.E. 60 sec)

Walk +1 min 41 sec (S.E. 45 sec)

Bike +4 min 33 sec (S.E. 64 sec)

Kelly, Intl J Behav Nutr Phys Act 2011, 8:44

Active travel – adults’ self-report error...

Page 23: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)


Then show Paul Journey2School...

Kelly, 2012, Am J Prev Med 43(5), 546 – 550

Self-report error in adolescents' travel...

Page 24: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)


Kerr, 2013, Am J Prev Med 44(3), 290 – 296

Sedentary behaviour...

Page 25: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)


Kerr, 2013, Am J Prev Med 44(3), 290 – 296

Sedentary behaviour... Images were classified as “un-codeable” when the camera lens was obstructed.

Data Analysis: SenseCam codes were aggregated to the minute level. A valid minute was defined as having the same posture and activity codes for the first and last image within the minute. SenseCam minute-level and accelerometer minute-level data were merged using the time-stamp data for each unit. Time spent in sedentary behavior postures and activities as coded by the SenseCam was compared to the classification from the accelerometer. Minutes in each behavior, minutes under the 100-cpm threshold, and mean counts in each behavior type were calculated. The sensitivity and specificity of the 100-cpm cutpoint were compared with the SenseCam-derived classification of sedentary behavior, calculated for the whole data set and each participant. All analyses were performed using SPSS, version 19.
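To make the merge-and-compare step concrete, here is a minimal Python/pandas sketch of the same logic (the study itself used SPSS; the frames and column names below are hypothetical stand-ins for the minute-level exports described above):

```python
import pandas as pd

# Hypothetical minute-level exports: one row per minute.
sensecam = pd.DataFrame({
    "timestamp": pd.date_range("2013-05-10 09:00", periods=6, freq="min"),
    "sensecam_sedentary": [True, True, False, False, True, False],  # image-coded label
})
accel = pd.DataFrame({
    "timestamp": pd.date_range("2013-05-10 09:00", periods=6, freq="min"),
    "cpm": [40, 80, 350, 120, 60, 90],  # accelerometer counts per minute
})

# Merge the two minute-level streams on their shared time stamps.
merged = sensecam.merge(accel, on="timestamp")

# Accelerometer rule: below 100 cpm counts as sedentary.
merged["accel_sedentary"] = merged["cpm"] < 100

# Treat the image coding as the reference classification.
tp = (merged["sensecam_sedentary"] & merged["accel_sedentary"]).sum()
fn = (merged["sensecam_sedentary"] & ~merged["accel_sedentary"]).sum()
tn = (~merged["sensecam_sedentary"] & ~merged["accel_sedentary"]).sum()
fp = (~merged["sensecam_sedentary"] & merged["accel_sedentary"]).sum()

print(f"sensitivity = {tp / (tp + fn):.2f}, specificity = {tn / (tn + fp):.2f}")
```

Run per participant as well as over the pooled minutes, this is the shape of computation behind the whole-data-set and per-participant sensitivity/specificity figures reported below.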

Results: A total of 40 participants completed the study. Each participant contributed a mean of 4 days of data; 70% were male; more than one third reported an annual household income of ≥$100,000; average age was 36 years (SD=12); and 85% were Caucasian. A total of 170 days were coded, including 86,109 valid minutes and 364,841 images. A total of 8546 minutes of images (8.2%) were classified as “uncodeable,” such as when the camera lens was obscured by an item of clothing or body part. It took trained coders approximately 2 hours to code 1 day of data including the six postures and 12 activity types. Participants were compliant with the wear instructions and did not report any problems with the devices. Several participants deleted a small number of images in the review process. Many participants were fascinated by the technology and research applications.

The coded image data were compared to the accelerometer data using the 100-cpm threshold. On average, the accelerometer cutpoint classified participants in sedentary behavior for 331.2 minutes per day (SD=135.8). The average minutes coded as sedentary per day from the SenseCam images was 302.2 (SD=130.0). The majority of minutes were spent in a sitting posture (Table 1).

The accelerometer 100 cpm correctly identified “sitting” 90% of the time. However, when the SenseCam image indicated “standing without movement” or “standing with movement,” the accelerometer recorded <100 cpm 72% and 35% of the time, respectively. Eleven percent of SenseCam bicycling had <100 cpm on the accelerometer. Overall, the sensitivity and specificity for the whole data set were 90% and 67%, respectively. The mean sensitivity across participants was 89% (SD=7%), and the mean specificity was 69% (SD=11%).

Table 2 presents the various activities that occurred when the SenseCam images were coded as “sitting.” The most-prevalent behavior categories, as determined by the number of minutes recorded by the SenseCam, were (1) other screen use; (2) administrative activity; (3) TV watching; (4) eating; and (5) riding in a car. TV viewing constituted 11% of the total sedentary time, compared with other screen use, which made up 46% of total observed sedentary time. The accelerometer cutpoint of 100 cpm was accurate almost 90% of the time in classifying each of these four individual behaviors. At least 26% of the time observed in a car had accelerometer counts ≥100 cpm.

Discussion: This study is the first to assess the convergent validity of a widely used accelerometer-based cutpoint for classifying sedentary behavior over several days of free-living behavior in adults using a discrete observation method. The …

Table 2. Minutes of coded sedentary posture from Microsoft’s SenseCam by activity category

Image code | Minutes | Percent time in accelerometer cpm <100 | Interquartile range of accelerometer cpm | Mean accelerometer cpm
Sports | 0 | — | 1525–2900 | 2330
Self care | 85 | 60 | 2–361 | 284
Manual labor | 202 | 35 | 41–669 | 432
Conditioning exercise | 230 | 21 | 85–1405 | 1262
Household activity | 244 | 58 | 10–305 | 260
Riding in other vehicle | 409 | 82 | 0–50 | 90
Leisure | 428 | 81 | 0–73 | 91
Riding in car | 4,653 | 74 | 8–103 | 101
Eating | 5,250 | 92 | 0–4 | 54
TV watching | 5,407 | 89 | 0–11 | 46
Administrative activity | 9,546 | 92 | 0–10 | 55
Other screen use | 22,881 | 93 | 0–0 | 34

cpm, counts per minute


Page 26: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)


Kerr, 2013, Am J Prev Med 44(3), 290 – 296

Sedentary behaviour...

Page 27: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Arab, Eur J Clin Nutr 2011;65(10):1156–62 O’Loughlin 2013 Am J Prev Med 44(3), 297 – 301

Nutrition...

Page 28: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

O’Loughlin 2013 Am J Prev Med 44(3), 297 – 301

Nutrition...

diary that matched the information obtained from the SenseCam.

Mean total calorie intake amounts using diary alone when compared with the combination of the diary plus SenseCam were as follows: 2349±827.9 kcal vs 2631±893.4 kcal for the trainee jockeys; 2600±521.9 kcal vs 3190±770.2 kcal for the Gaelic footballers; and 2237±318.5 kcal vs 2487±404.6 kcal for the university students (Figure 2). Differences among measurement methods of 10.7% (p<0.001); 17.7% (p<0.001); and 10.1% (p<0.01) were reported for the trainee jockeys, Gaelic footballers, and university students, respectively (Table 1).

Discussion: The results of this initial study suggest that the SenseCam may provide useful benefits in terms of augmenting established techniques for recording energy intake. The images obtained from the SenseCam provide additional information regarding dietary intake patterns and highlighted a significant under-reporting of calorie intake, ranging from 10% to 18% in the populations studied. To the authors’ knowledge, no previous research has been published examining the potential use of the SenseCam for dietary assessment. The current study thus stands as a proof of concept, indicating the potential value of the SenseCam in dietary analysis.

The automatic nature of the SenseCam offers many advantages over studies using a user-activated camera to log foods. Requesting that individuals manually photograph before–after images of meals is dependent on subject compliance, is potentially intrusive, and requires the individual to remember not only the camera but also to take photographs at appropriate times. Previous studies [13–15,24] have reported no general advantage in terms of accuracy in using a conventional camera to capture dietary intake over food diaries because of participant burden and incomplete photographic food records.

The process of recording food consumption has been shown previously to influence habitual dietary practices to the extent that alternate choices may be made, as participants are aware that their diary will be analyzed [25]. During the current study, each subject wore the SenseCam and simultaneously completed a food diary for a 1-day period. When conducting dietary analysis over more extended periods, the SenseCam may become more valuable because of decreased enthusiasm for diary logging by participants over time. Previous studies published by the current authors’ research group have used a 7-day food diary to gain insight into the dietary practices of jockeys. However, as with all dietary assessment methods, under-reporting was evident [21]. Use of the SenseCam provides a mechanism to check the accuracy of the diary and, because it works automatically, is not subject to reporting biases.

Figure 1. Images from the Microsoft SenseCam showing food consumed by the subject. Note: (a) an individual’s dinner, allowing the extra condiment about to be put on the plate to be remembered; (b) a breakfast image allowing questions regarding portion size.

[Figure 2. Comparison of energy intake (calorie intake, y-axis) using two assessment methods: diary alone, and diary with Microsoft SenseCam, for trainee jockeys, Gaelic footballers, and university students. *p<0.01, **p<0.001]

Table 1. Mean difference in energy intake between two dietary assessment methods, M±SD

Group | Mean difference (kcal) | Mean difference (%)
Trainee jockeys | 282±164 | 10.7**
Gaelic footballers | 591±304 | 17.7**
University students | 250±216 | 10.1*

*p<0.01, **p<0.001


Page 29: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

O’Loughlin 2013 Am J Prev Med 44(3), 297 – 301

Nutrition...


Page 30: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Arab, Eur J Clin Nutr 2011;65(10):1156–62 O’Loughlin 2013 Am J Prev Med 44(3), 297 – 301

Assessment of environment features...

that individuals may encounter during walking or cycling journeys. Although the current dataset was derived from a limited number of participants and geographic area, all hypothesised features of importance identified from the audit tools were identified from the images captured. The tendency to under-report journey duration is in contrast with previous SenseCam research [22,32] and Global Positioning System (GPS) studies [33], possibly due to the small convenience sample and focus on work-related walking and cycling trips only. We found significant differences in the presence of specific features between walking and cycling modes, suggesting preliminary support for the content validity of this approach. For example, a significantly greater proportion of footpaths, pedestrians, and pedestrian crossings were found for walking trips, while a higher prevalence of car presence was found for cycling journeys. With the exception of cycle lanes, all significant differences between features identified by walking and cycling were in the expected direction. The lack of cycle lanes in the study areas may explain this finding somewhat, whereby many cycling journeys were completed on roads without cycle lanes. Improving on existing audits that do not reflect temporal exposure, use of the SenseCam data enabled the capture of factors that individuals actually encountered during active transport journeys, such as traffic density; weather conditions; presence of pedestrians, cyclists, and dogs; and temporary obstructions to walking or cycling.

Almost a quarter of data were lost due to images being too dark to enable coding of features. In part, this is likely due to the study being conducted during winter, with …

[Figure 1. Sample images and exemplar coding of features present. Example codings: “Cars driving, pedestrians, pedestrian crossing, rain, road good condition, trees”; “Cars driving, cycle lane, dark, other lights, pedestrian crossing, road good condition”; “Trees, walkway”; “Cars driving, cycle lane, footpath, road good condition”; “Congested traffic, cars driving, footpath, footpath good condition, grass verge, grass verge maintained, residential, retail buildings, road good condition, trees”. Note: Data were collected in Auckland, New Zealand, in June 2011.]

Table 3. Journey duration characteristics

Trip duration (minutes) | Cycling (n = 8), mean (min, max) | Walking (n = 21), mean (min, max) | Total (n = 29), mean (min, max)
Reported duration | 20.1 (15.0, 53.0) | 22.0 (10.0, 45.0) | 21.5 (10.0, 53.0)
SenseCam duration | 21.3 (9.9, 56.6) | 22.3 (9.6, 60.0) | 21.7 (9.6, 56.6)
Difference (Reported − SenseCam) | −1.1 (−3.6, 5.2) | −0.2 (−11.1, 8.6) | −0.42 (−11.1, 8.6)

Notes: Data were collected in Auckland, New Zealand, in June 2011. n = number of journeys. One walking journey was extracted for the comparison between reported and SenseCam trip duration as this was not recorded on the travel diary and would have biased the comparisons.


Page 31: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Oliver, 2013 Int J Health Geogr 12(20)

Assessment of environment features...

on journeys between home and workplace only was a pragmatic decision to ensure manageability of data treatment and acknowledging the significant contribution work-related travel makes to overall travel behaviours. Travel diaries were considered the criterion for occurrence and mode of trips undertaken. Although GPS data can be used to identify walking and cycling journeys, trip purpose is not captured, therefore the travel diary was deemed an appropriate measure of work-related journey occurrence for the purposes of the current study. SenseCam images were considered the …

Table 1. Description of environmental features present in walking and cycling journeys

Feature | Description | Cycling (n = 599†), n (%) | Walking (n = 1150†), n (%) | Total (n = 1749†), n (%)
Bus stop | Bus stop visible in photo | 44 (7.3) | 69 (6.0) | 113 (6.5)
Cars driving | Cars in motion or in traffic lanes on road | 388 (64.8) | 674 (58.6)* | 1062 (60.7)
Cars in carpark | Cars parked in car park wholly or more than 2/3 partially visible | 68 (11.4) | 110 (9.6) | 178 (10.2)
Cars parked | Cars parked on side of the road | 190 (31.7) | 151 (13.1)** | 341 (19.5)
Commercial | Commercial or institutional buildings visible | 281 (46.9) | 648 (56.3)** | 929 (53.1)
Congested traffic | More than 6 stationary cars in driving lanes | 4 (0.7) | 10 (0.9) | 14 (0.8)
Cycle lanes | Designated cycle lane on road or footpath | 16 (2.7) | 247 (21.5)** | 263 (15.0)
Cyclists | Any person/people riding cycles other than the participant | 6 (1.0) | 8 (0.7) | 14 (0.8)
Dark | Image indicates journey conducted in darkness (e.g., dusk or dawn, streetlights on) but features still visible and image codeable† | 120 (20.0) | 209 (18.2) | 329 (18.8)
Dogs | Dogs or a lead in participant hand visible | 0 (0.0) | 4 (0.3) | 4 (0.2)
Footpath | Footpath visible (not walkway/pathway) | 338 (56.4) | 761 (66.2)** | 1099 (62.8)
Footpath good condition | No cracks or potholes visible | 327 (54.6) | 759 (66.0)** | 1086 (62.1)
Graffiti | Graffiti visible | 0 (0.0) | 2 (0.2) | 2 (0.1)
Grass verge | Any area of grass either beside road or footpath | 270 (45.1) | 504 (43.8) | 774 (44.3)
Grass verge maintained | No obvious weeds or overgrown grass | 262 (43.7) | 454 (39.5) | 716 (40.9)
Litter | Litter present (e.g., paper, food wrappings, etc.) | 1 (0.2) | 1 (0.1) | 2 (0.1)
Other lights | Lights from houses, buildings or cars in photos | 247 (41.2) | 348 (30.3)** | 595 (34.0)
Pedestrian crossing | Zebra crossings and traffic light pedestrian crossings visible | 82 (13.7) | 240 (20.9)** | 322 (18.4)
Pedestrians | Any person/people in the photo other than the participant | 63 (10.5) | 272 (23.7)** | 335 (19.2)
Permanent obstructions to cycling | Tree, signage, or other permanent structure in cycleway | 2 (0.3) | 0 (0.0) | 2 (0.1)
Permanent obstructions to walking | Tree, signage, or other permanent structure on footpath/walkway | 2 (0.3) | 0 (0.0) | 2 (0.1)
Rain | Rain visible | 63 (10.5) | 54 (4.7)** | 117 (6.7)
Residential | Private homes visible | 155 (25.9) | 229 (19.9)** | 384 (22.0)
Retail buildings | Buildings with retail/shop-fronts visible | 141 (23.5) | 165 (14.3)** | 306 (17.5)
Road good condition | No cracks or potholes visible | 462 (77.1) | 820 (71.3)** | 1282 (73.3)
Street lighting | Street lights visible (not including traffic lights) | 209 (34.9) | 531 (46.2)** | 740 (42.3)
Temporary obstructions to cycling | Rubbish bins, parked cars, roadworks, etc. in cycleways | 9 (1.5) | 11 (1.0) | 20 (1.1)
Temporary obstructions to walking | Rubbish bins, parked cars, roadworks, etc. on footpath/walkway | 14 (2.3) | 41 (3.6) | 55 (3.1)
Trees | Any trees visible in photo including from a distance | 441 (73.6) | 842 (73.2) | 1283 (73.4)
Walkway | Journey occurring in walkway/pathway (not road or footpath) | 45 (7.5) | 200 (17.4)** | 245 (14.0)

Notes: Data were collected in Auckland, New Zealand, in June 2011. n = number of images. *p < 0.05; **p < 0.01, significant difference in features present between walking and cycling journeys; †If a photo was too dark to code individual features then it was coded as uncodeable and not included here; %, percentage of walking or cycling images where feature was present.


Example features: Bus stop Cycle lanes Graffiti Grass verge Rain Street lighting Trees

Page 32: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

49 participants wore SenseCam for 3 days each; 311 accelerometer bouts randomly selected; 79% of episodes could be coded (n=311)

- 14% had no associated image data (n=57)
- 3% unsure of activity from images (n=10)
- 2% images too dark to code (n=7)

Doherty, Intl J Behav Nutr Phys Act 2013 10(22)

Wearable cameras contextualising accelerometer data

Page 33: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty, Intl J Behav Nutr Phys Act 2013 10(22)

Wearable cameras with accelerometers

Page 34: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

311 episodes coded
- 12 PA Compendium categories identified
- 114 PA Compendium subcategories identified

59% outdoors, 39% indoors
33% leisure time, 33% transportation, 18% domestic, 15% occupational
45% episodes non-social, 33% direct social, 22% social/not-engaged

Doherty, Intl J Behav Nutr Phys Act 2013 10(22)

Wearable cameras contextualising accelerometer data

Page 35: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty, Intl J Behav Nutr Phys Act 2013 10(22)

Activity type...

Page 36: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty, Intl J Behav Nutr Phys Act 2013 10(22)

Activity context...

Page 37: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

“… Manual coding of the images is also time-consuming and coding errors can occur … ” 150 participants, 7 days of living, 2000 images per day = 2.1 million images !!!

Kerr 2013; Am J Prev Med 44(3): 290-296

How to manage all these images?...

Page 38: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Framework based on organising data into events

Human memory:

“… segmenting ongoing activity into distinct events is important for later memory of those activities …“ (Zacks 2006, Psychology and Aging 21:466-482)

Video retrieval:

“… automatic shot boundary detection (SBD) is an enabling function for almost all automatic structuring of video…” (Smeaton 2010, Computer Vision and Image Understanding 114(4):411-418)

Wearable accelerometers:

“… for comparison with physical activity recommendations, 10-min activity bouts were defined as 10 or more consecutive minutes above the relevant threshold …” (Troiano 2008, Med Sci Sport Exer 40(1):181-8)

Early lifelogging:

“… continuous recordings need to be segmented into manageable units so that they can be efficiently browsed and indexed …” (Lin 2006, Proc. SPIE MM Content Analysis, Management, and Retrieval)

British Heart Foundation Health Promotion Research Group

Wearable camera data management?...
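The accelerometer bout definition quoted above (Troiano 2008) is itself a small segmentation algorithm. A minimal sketch of that rule exactly as quoted, i.e. strictly consecutive minutes above a threshold (the 2020 counts-per-minute value is only an illustrative moderate-intensity cutpoint, not something stated on this slide):

```python
def find_bouts(cpm, threshold=2020, min_len=10):
    """Return (start, end) index pairs for runs of at least `min_len`
    consecutive minutes with counts above `threshold`."""
    bouts, run_start = [], None
    for i, value in enumerate(cpm):
        if value > threshold:
            if run_start is None:
                run_start = i  # a qualifying run begins here
        else:
            if run_start is not None and i - run_start >= min_len:
                bouts.append((run_start, i))
            run_start = None
    if run_start is not None and len(cpm) - run_start >= min_len:
        bouts.append((run_start, len(cpm)))
    return bouts

# 12 active minutes embedded in a quieter stretch -> one 12-minute bout.
minutes = [150] * 5 + [2500] * 12 + [90] * 5
print(find_bouts(minutes))  # [(5, 17)]
```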

Page 39: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty 2013. IEEE Pervasive 12(1); 44-47

Processing wearable camera data...

Page 40: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty 2013. IEEE Pervasive 12(1); 44-47

Processing wearable camera data...

Page 41: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty 2013. IEEE Pervasive 12(1); 44-47

Processing wearable camera data...

Page 42: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty 2011; Memory 19(7):785-795

Event segmentation overview...

Page 43: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Take WIAMIS / Yahoo! Research slides showing the nuts ‘n bolts of this…

Doherty 2011; Memory 19(7):785-795

Event segmentation overview...
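As a rough illustration of what event segmentation over a continuous image-and-sensor stream can look like (a hedged sketch only, not the algorithm published in Doherty 2011): place an event boundary wherever consecutive per-image sensor readings change sharply. The channels, values, and threshold below are all hypothetical.

```python
import numpy as np

def segment_by_sensor_change(readings, threshold=2.0):
    """Return boundary indices where consecutive normalised sensor
    vectors (e.g. accelerometer + light level per image) differ strongly."""
    X = np.asarray(readings, dtype=float)
    X = (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-9)  # z-score each channel
    diffs = np.linalg.norm(np.diff(X, axis=0), axis=1)  # change between neighbours
    return list(np.flatnonzero(diffs > threshold) + 1)

# Toy day: sitting (low movement, indoor light), walking outdoors, sitting again.
day = [[0.1, 200]] * 30 + [[1.5, 5000]] * 20 + [[0.2, 250]] * 25
print(segment_by_sensor_change(day))  # boundaries at indices 30 and 50
```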

Page 44: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Data divided into training and test sets with thousands of different combinations evaluated

From groundtruth we noticed: Average of 22 events groundtruthed per day

Approach Recommended: Quick segmentation (sensor values only)

Performance: F1 score of 60% against users’ semantic boundaries

Doherty 2011; Memory 19(7):785-795

Event segmentation – how good is it?...
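For reference, the 60% figure above is an F1 score of predicted event boundaries against users' annotated boundaries. A minimal sketch of that computation (exact-match scoring on hypothetical boundary indices; the published evaluation may credit near-misses more leniently):

```python
def boundary_f1(predicted, groundtruth):
    """F1 of predicted event boundaries vs. user-annotated ones,
    using exact-match scoring for simplicity."""
    pred, truth = set(predicted), set(groundtruth)
    tp = len(pred & truth)
    if tp == 0:
        return 0.0
    precision = tp / len(pred)
    recall = tp / len(truth)
    return 2 * precision * recall / (precision + recall)

# Toy example: indices of images where a new event is said to begin.
print(boundary_f1(predicted=[40, 95, 200, 310],
                  groundtruth=[40, 100, 200, 310, 450]))  # ~0.67
```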

Page 45: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Take Mediamill SVM slides showing the nuts ‘n bolts of this… highlight groundtruth construction, etc.

Staudenmayer 2012; MSSE 44(1):S61-67

Event identification overview...

Page 46: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Take Mediamill SVM slides showing the nuts ‘n bolts of this… highlight groundtruth construction, etc.

Staudenmayer 2012; MSSE 44(1):S61-67

We could use accelerometer signals...

Page 47: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Take Mediamill SVM slides showing the nuts ‘n bolts of this… highlight groundtruth construction, etc.

Staudenmayer 2012; MSSE 44(1):S61-67

We could use accelerometer signals...

Page 48: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Video of image classification using a support vector machine …

Snoek 2009; Found Trends Inf Retr 2(4):215-322

Could also use images too...
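To give a flavour of the classification step, here is a minimal scikit-learn sketch in the spirit of MediaMill-style concept detection (an SVM over precomputed visual features; the random vectors below are hypothetical stand-ins for real image descriptors such as colour or texture histograms):

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Stand-in features: one 128-d descriptor per SenseCam image,
# with toy labels such as 0 = "eating", 1 = "TV watching".
X = rng.normal(size=(400, 128))
y = rng.integers(0, 2, size=400)

X_train, X_test, y_train, y_test = train_test_split(
    X, y, test_size=0.25, random_state=0)

# RBF-kernel SVM over scaled features, as in classic concept detection.
clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
clf.fit(X_train, y_train)
print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
```

With real descriptors and labelled training events, a pipeline of this shape yields one detector per behaviour concept.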

Page 49: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Take Mediamill SVM slides showing the nuts ‘n bolts of this… highlight groundtruth construction, etc.

Doherty 2011; Comput Hum Behav 27(5):1948-1958

Event identification overview...

Page 50: Automatically identifying lifestyle behaviours from SenseCam images (invited talk at the London School of Economics)

Doherty; Am J Prev Med 2013; 43(5), 320 – 323

http://ajpmonline.wordpress.com/2013/04/15/using-wearable-cameras-in-your-research/

When using wearable cameras in ethnography, we should: 1) Identify episodes/activities & 2) Categorise their behavioural type & context