annotation of emotions in meetings in the AMI project
Transcript of a slide presentation on the annotation of emotions in meetings in the AMI project.
annotation of emotions in meetings in the AMI project

Roeland Ordelman* & Dirk Heylen
Human Media Interaction
University of Twente, The Netherlands
overview
• on the AMI project: ~100 hours of audio/video recordings of project meetings
• on annotating the emotional state of the participants in the meetings
• on emotion annotation in AMI
• on implementing the annotation tool in the NXT framework
AMI in brief
• European Integrated Project of the IST 6th Framework Programme, initiated in January 2004 and involving 15 partners
• aims at advancing the state of the art in important basic technologies such as human-human communication modeling, speech recognition, computer vision, and multimedia indexing and retrieval
• produces tools for off-line and on-line browsing of multi-modal meeting data, including meeting structure analysis and summarizing functions
• makes recorded and annotated multimodal meeting data widely available to the European research community, thereby contributing to the research infrastructure in the field
on the AMI meetings
• scenario-based meetings
  – participants are asked to carry out a certain task and are provided with role-restricted information
  – a design project involving one particular task (remote control design)
  – ~35 hours (48 meetings)
• meetings on real design projects
  – such as engineering student projects, brochure or poster design, etc.
  – ~35 hours
• other real or scenario-based meetings
  – cover a wider variety of meeting types, topics, behaviours, etc.
  – ~30 hours
on the recording of the meetings
• smart meeting rooms at three different sites (TNO, Edinburgh, IDIAP)
  – video (side camera, close-up)
  – audio (microphone array, lapel, headset, manikin)
  – whiteboard strokes
  – pen
ami annotations
• many features of meeting interactions are annotated:
  – speech
  – gestures
  – dialogue acts
  – posture
  – emotion
• the information is used for (at least) three purposes:
  – primary meta-data source for multi-featured browsing of the recorded meetings
  – training recognition algorithms that should eventually be able to provide automatically generated meta-data
  – evidence on the basis of which theoretical models of human multi-party interaction can be developed
properties of the data
• ~100 hours of meeting data
• emotion annotation of every single speaker in a meeting: 4 x 100 hours!
• only a very small proportion can be expected to show emotional states in any strong sense
• the neutral state will cover much of the data
emotion annotation
• no general agreement on how to annotate or label emotional content in a natural database
• a number of emotion annotation or labeling schemes have been proposed in the literature
• given the gradations and subtlety of emotions occurring in natural data, the labeling of emotion using category labels is not straightforward
emotion annotation in AMI
Given:
1. the amount of data to be annotated, and
2. the expected gradations and subtlety of emotions occurring in meeting data

a dimensional labeling approach complemented by a categorical labeling scheme seemed most appropriate in the context of AMI.

The FeelTrace software developed at Queen's University Belfast (Cowie et al.) is reported to produce good-quality annotations within a reasonable amount of time.
FeelTrace
• judge the emotional experience of the participants of the meetings on two dimensions: arousal and valence
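To make the dimensional scheme concrete, here is a minimal sketch of how such a trace could be represented, assuming time-stamped (valence, arousal) samples in [-1, 1]; this is an illustration, not the actual FeelTrace implementation:

```python
from dataclasses import dataclass
from typing import List

@dataclass
class EmotionSample:
    """One cursor position on a FeelTrace-style 2D space."""
    time: float      # seconds from the start of the meeting segment
    valence: float   # -1.0 (negative) .. +1.0 (positive)
    arousal: float   # -1.0 (passive)  .. +1.0 (active)

@dataclass
class EmotionTrace:
    """Dimensional annotation of one meeting participant by one annotator."""
    meeting_id: str
    participant: str
    annotator: str
    samples: List[EmotionSample]

    def at(self, t: float) -> EmotionSample:
        # return the most recent sample at or before time t
        # (assumes samples are sorted by time)
        earlier = [s for s in self.samples if s.time <= t]
        return earlier[-1] if earlier else self.samples[0]
```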
annotation with FeelTrace in AMI
• survey on meeting-specific landmarks
• pilot annotations
• re-implementation of FeelTrace
landmarks survey (1)
• (main investigator: Vincent Wan, University of Sheffield)
• method:
  – a list of 243 terms describing emotions
  – participants (37) had to select the twenty emotions that they most frequently perceived in their meetings
  – participants came from various companies and had various job descriptions, including lecturers, researchers, managers, secretaries and students
  – the 243 terms were clustered by meaning into groups; the most frequently chosen one or two labels were shortlisted from each group. Taking some labels from each group ensures that there is sufficient coverage of the emotion space (see the sketch below)
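The shortlisting step amounts to a frequency count per meaning cluster. A minimal sketch, assuming hypothetical inputs (one list of chosen terms per participant and a hand-made clustering); this is not the actual survey tooling:

```python
from collections import Counter
from typing import Dict, List

def shortlist_landmarks(selections: List[List[str]],
                        clusters: Dict[str, List[str]],
                        per_cluster: int = 2) -> List[str]:
    """Shortlist the most frequently chosen labels from each meaning cluster.

    selections: one list of chosen terms per survey participant
    clusters:   cluster name -> the terms grouped under that cluster
    """
    counts = Counter(term for chosen in selections for term in chosen)
    shortlist = []
    for terms in clusters.values():
        ranked = sorted(terms, key=lambda t: counts[t], reverse=True)
        # keep only terms that were actually chosen at least once
        shortlist.extend(t for t in ranked[:per_cluster] if counts[t] > 0)
    return shortlist
```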
landmarks survey (2)
• the resulting list of 26 'meeting domain specific' emotional words:

at ease, bored, joking, annoyed, nervous, satisfied, frustrated, amused, relaxed, interested, cheerful, uninterested, disappointed, agreeable, contemplative, encouraging, sceptical, friendly, attentive, confused, confident, decisive, impatient, concerned, serious, curious
landmarks survey (3)
• a second survey to determine where each of the shortlisted labels should appear in the FeelTrace emotional space:
  – participants were first presented with five labels: anger, irritation, sadness, happiness and contentment (so that participants unfamiliar with FeelTrace would get some minimal experience in its use)
  – each emotional word was then presented twice: the first time to give the participant additional training and the second to collect data
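The slides do not spell out how the collected placements become landmark coordinates; one plausible aggregation, shown here as an assumption with hypothetical data, is to average each label's second-pass placements:

```python
from statistics import mean
from typing import Dict, List, Tuple

Point = Tuple[float, float]  # (valence, arousal), both in [-1, 1]

def landmark_positions(placements: Dict[str, List[Point]]) -> Dict[str, Point]:
    """Average each label's collected placements into one landmark point."""
    return {
        label: (mean(v for v, _ in pts), mean(a for _, a in pts))
        for label, pts in placements.items()
    }

# hypothetical placements for two of the 26 meeting-domain labels
placements = {
    "bored":      [(-0.4, -0.6), (-0.3, -0.7), (-0.5, -0.5)],
    "interested": [(0.5, 0.4), (0.6, 0.3), (0.4, 0.5)],
}
print(landmark_positions(placements))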
example results 1 [figure]
example results 2 [figure]
example results 3 [figure]
resulting landmark distribution [figure]
annotation trials
• we have 15-20 annotators available
• coming weeks: first annotation trials
• what we want to learn:
  – inter-annotator agreement
  – distribution of emotional states in meeting data (how much is neutral)
  – annotator experience with the tool
  – effect of using/not using landmarks
  – validation of the manual, annotation area, etc.
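For continuous traces, inter-annotator agreement is commonly assessed by resampling the annotators' traces onto a common time grid and correlating them per dimension. A minimal sketch using Pearson correlation and hypothetical data:

```python
from math import sqrt
from typing import List

def pearson(x: List[float], y: List[float]) -> float:
    """Pearson correlation between two equally long, non-constant series."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# hypothetical arousal traces from two annotators, on the same time grid
annotator_a = [0.1, 0.2, 0.4, 0.3, 0.0]
annotator_b = [0.0, 0.3, 0.5, 0.2, 0.1]
print(f"arousal agreement: {pearson(annotator_a, annotator_b):.2f}")
```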
trial annotation set-up
• a set of 4 meetings, each of about 20 minutes, 4 speakers per meeting
• 10-minute segments (0-10, 10-20)
• (optional) 'dummy' pass with FeelTrace
• 'official' pass with FeelTrace
• with and without landmarks
• no categorical labeling (yet)
implementation in NXT
• the Belfast implementation of the dimensional approach has some limitations:
  – no cross-platform support
  – cannot easily be tailored to the specific needs of different 'stakeholders' of annotations (e.g., additional categorical labeling of longer segments is hard in the current setup)
stakeholders of annotation
• corpus developer
  – defines the structure of the corpus, maintains the data, takes care of a proper data/tool distribution (CVS, validation, time)
• corpus annotator
  – needs to know as little as possible about configuration, installation and version-control issues; only the annotation process itself is of concern to the annotator
• data consumer
  – interested in the annotations for analysis and may want to configure the annotation process in detail (tool functionalities such as redo, fast-forward, landmarks/no landmarks, etc.)
• tool developer
  – creates the tool that serves the needs of the other users, keeping technical issues in mind
NXT
• NITE XML Toolkit
• defines a data storage format that can easily be shared across a multitude of annotation and analysis tools for the many different aspects of a multi-modal corpus
• the NXT libraries provide many ready-made components that facilitate easy development of new tools
• expertise readily available at the University of Twente
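As a purely illustrative sketch of what a shared, time-stamped emotion layer could look like when serialized, the snippet below emits stand-off-style XML in the spirit of NXT; the element and attribute names are hypothetical, not the actual AMI/NXT schema:

```python
import xml.etree.ElementTree as ET

# Illustrative only: a stand-off, time-stamped emotion layer in the spirit
# of NXT's XML storage format. Element and attribute names are hypothetical,
# not the actual AMI/NXT schema.
layer = ET.Element("emotion-layer", meeting="ES2002a", participant="A",
                   annotator="ann01")
sample = ET.SubElement(layer, "emotion", start="12.40", end="12.44")
sample.set("valence", "0.35")
sample.set("arousal", "0.10")

print(ET.tostring(layer, encoding="unicode"))
```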
draft list of requirements
• easy distribution of data (and tools) to and from the annotators (e.g., annotating certain segments of a file, CVS functionality, batch functionality)
• easy management of multiple annotators, possibly working on the same data (e.g., CVS functionality)
• validation functionality, or possibilities to plug this into the process
• as many input formats as possible on as many platforms as possible
• customizable landmarks, dimensions (1D, 2D, 3D), shortcuts
• categorical labeling options within the tool
• video control (fast-forward/backward)
• progress bar
• replay of annotation aligned with video
• selection of multiple video signals (close-up, wide angle)
• color coding of the 2D space (provide a priori color feedback instead of feedback via a changing cursor color; see the sketch below)
• easy configuration in general

→ discuss requirements with AMI/HUMAINE researchers
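The a priori color feedback requirement could be met with a fixed position-to-color mapping painted onto the space. A minimal sketch, assuming valence drives hue and arousal drives brightness; the mapping here is only a design illustration, not the original FeelTrace scheme:

```python
import colorsys

def space_color(valence: float, arousal: float) -> tuple:
    """Map a point of the 2D emotion space to an RGB color.

    Assumption: hue runs from red (negative valence) to green (positive
    valence), and arousal controls brightness.
    """
    hue = (valence + 1.0) / 2.0 * (1.0 / 3.0)  # 0.0 = red .. 1/3 = green
    value = 0.5 + (arousal + 1.0) / 4.0        # 0.5 (passive) .. 1.0 (active)
    return colorsys.hsv_to_rgb(hue, 1.0, value)

print(space_color(-0.8, 0.2))  # negative-valence corner -> reddish
```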
roadmap
• start first trial annotations
• investigate trial results
• discuss the re-implementation of FeelTrace in NXT with interested parties and create a first implementation version
• follow-up trials, FeelTrace-NXT versions
• monitor the process and discuss results with researchers in the field (AMI, HUMAINE)