What are the indicators that allow us to read a subject's emotional state?

Kinematics of Movement

Daniel Lewkowicz

Introduction

Heider & Simmel (1944) showed their participants a visual scene in which different figures moved in various directions and at various speeds. They observed that, even when participants were asked to describe the scene in geometrical terms, the movements were described in terms of intentions, and emotions were attributed to the figures.

Michotte (1950) showed that different interpersonal emotions are evoked by the mere movements of geometric figures.

Rimé, Boulanger, Laubin, Richir & Stroobants (1985) found similar results and went even further, showing that certain interpersonal emotions evoked by the kinematics of geometrical figures are cross-cultural. A series of developmental studies has shown that the ability to interpret simple geometrical shapes as intentional agents, purely from their kinematic properties, is also present in young infants (see Scholl & Tremoulet, 2000, for a review).

One set of stimuli, the Frith-Happé animations, has been widely used in neuroimaging studies (Castelli, Happé, Frith, & Frith, 2000; Gobbini, Koralek, Bryan, Montgomery, & Haxby, 2007; Moriguchi, Ohnishi, Mori, Matsuda, & Komaki, 2007), and in studies of:
- autism (Castelli, Frith, Happé, & Frith, 2002; Kana, Keller, Cherkassky, Minshew, & Just, 2009),
- schizophrenia (Horan et al., 2009; Koelkebeck et al., 2010),
- various other psychopathological conditions (Bird, Castelli, Malik, Frith, & Husain, 2004; Fyfe, Williams, Mason, & Pickup, 2008; Lawrence et al., 2007; Moriguchi et al., 2006; Rosenbaum, Stuss, Levine, & Tulving, 2007).

Kinematic analysis of the Frith-Happé animations (Roux et al., 2013)

Eye-tracking results (Roux et al., 2013)
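Kinematic analyses of this kind start from the sampled 2-D positions of each animated figure. Below is a minimal sketch of how speed and acceleration profiles can be extracted; the 25 fps frame rate and the finite-difference scheme are assumptions for illustration, not details taken from Roux et al. (2013).

```python
import numpy as np

def kinematic_profile(xy, fps=25.0):
    """Speed and acceleration magnitude from sampled 2-D positions.

    xy : (n, 2) array of x, y positions of one figure, one row per frame.
    fps: assumed frame rate of the animation.
    """
    dt = 1.0 / fps
    vel = np.gradient(xy, dt, axis=0)       # (n, 2) velocity components
    speed = np.linalg.norm(vel, axis=1)     # tangential speed per frame
    acc = np.gradient(vel, dt, axis=0)
    acc_mag = np.linalg.norm(acc, axis=1)
    return speed, acc_mag

# Toy trajectory: uniform motion along x at 2 units/s, sampled at 25 fps.
t = np.arange(0, 1, 1 / 25.0)
xy = np.column_stack([2.0 * t, np.zeros_like(t)])
speed, acc = kinematic_profile(xy)
```

From such profiles one can derive the summary parameters (mean speed, peaks, pauses) that animation conditions are then compared on.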

Conclusion:

It is possible to obtain an implicit measure of intention attribution to the Frith-Happé animations.

The example of walking

Conclusion: from kinematics alone it is possible to discriminate gender, but also age, mental state, actions, and intentions. How can this be verified? (Johansson, 1973, 1976; Barclay, Cutting, & Kozlowski, 1978; Blakemore & Decety, 2001; Dittrich, Troscianko, Lea, & Morgan, 1996; Mather & Murdoch, 1994; Pollick, Paterson, Bruderlin, & Sanford, 2001; Runeson, 1994; Troje, 2002a, 2002b).

Wallbott (1998) observed that there seem to be distinctive patterns of movement and postural behavior associated with certain emotions.

Non-fluent body movements express anger, fear, and joy, while fluent ones express sadness, boredom, and happiness.
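Movement fluency of this kind is commonly quantified with a smoothness index based on jerk (the third time derivative of position). The dimensionless normalization below is one standard choice, not Wallbott's own measure; the two test movements are synthetic.

```python
import numpy as np

def normalized_jerk(pos, dt):
    """Dimensionless jerk cost of a 1-D movement; lower = smoother (more fluent).

    pos: sampled positions; dt: sampling interval in seconds.
    """
    vel = np.gradient(pos, dt)
    acc = np.gradient(vel, dt)
    jerk = np.gradient(acc, dt)
    duration = dt * (len(pos) - 1)
    amplitude = np.ptp(pos)
    # Integrated squared jerk, scaled by duration and amplitude to be unit-free.
    return np.sum(jerk ** 2) * dt * duration ** 5 / amplitude ** 2

# A smooth minimum-jerk-like reach vs. the same reach with a tremor added.
t = np.linspace(0, 1, 200)
smooth = 10 * t**3 - 15 * t**4 + 6 * t**5        # minimum-jerk profile
shaky = smooth + 0.02 * np.sin(40 * np.pi * t)   # superimposed oscillation
nj_smooth = normalized_jerk(smooth, t[1] - t[0])
nj_shaky = normalized_jerk(shaky, t[1] - t[0])
```

The shaky movement scores a much higher jerk cost, matching the intuition that non-fluent movements are jerkier.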

A social interactive game: an adapted version of Jungle Speed
- Ecological interactive protocol
- Standardized relative positions (start and object)
- Real-time control (Qualisys 4-camera system)

Lewkowicz, D., Delevoye-Turrell, Y., Bailly, D., Andry, P., & Gaussier, P. (2013). Reading motor intention through mental imagery. Adaptive Behavior, 21(5), 315-327.

We designed an ecological interactive protocol, an adapted version of the Jungle Speed game, in which the aim is to reach as fast as possible for the dowel at the center of the table. We were specifically interested in the preparatory movement and in the rewarding movements that depend on who grasped the dowel first in the competitive move. For example, if the agent won the previous round, she takes the dowel and places it near herself; if the partner won the previous round, she places it near him. So there were three different goals, but the first motor element remained the same: a reach-to-grasp from the starting position to the object position. We then used the video recordings of the trials to show to human observers. Because the reach-to-grasp movement is always the same from the starting position to the object position, we showed them only the reach-to-grasp, stopped the movie at the exact time the fingers made contact with the object, and asked the observers to select one of three categories for that movement.

Experimental procedure: 26 participants viewed 192 trials (48 trials × 4 blocks).
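Cutting the video at the reach-to-grasp boundary requires segmenting the movement from the motion-capture record. A velocity-threshold rule is the usual approach; the sketch below, with an assumed threshold of 5% of peak speed, illustrates the idea rather than the exact criterion used in the study.

```python
import numpy as np

def segment_movement(speed, threshold_ratio=0.05):
    """Return (onset, offset) frame indices of the main movement.

    speed: tangential speed of the wrist/finger marker, one value per frame.
    Onset/offset: first/last frame where speed exceeds a fraction of its peak.
    """
    threshold = threshold_ratio * speed.max()
    above = np.flatnonzero(speed > threshold)
    return above[0], above[-1]

# Toy speed profile: flat, one bell-shaped reach, flat again.
t = np.linspace(0, 2, 200)
speed = np.exp(-((t - 1.0) ** 2) / (2 * 0.1 ** 2))   # peak at t = 1 s
onset, offset = segment_movement(speed)
```

In practice a minimum-duration criterion is often added so that noise spikes are not counted as movement onsets.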

Task: judge the agent's intention

3 different key presses to select:
- Initiate the game (Play)
- Place in my workspace (Me)
- Place in your workspace (You)

At the end of each block: self-evaluation (visual analog scales)

26 young adults participated in the study; their task was to judge the agent's intention based on the early part of the movement shown in the movies. They gave their classification on a keypad: whether the movement they had just seen would be followed by a placing movement at the center of the table, a placement in the agent's workspace, or a placement in the partner's workspace. At the end of each block, each participant filled in a self-evaluation scale reporting whether they judged their classification performance to be good or not.

Results:

Conclusions:
- Human participants can read intentions from early movement kinematics.
- They under-estimate this ability, rating themselves close to chance level.
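The "above chance" comparison can be made precise with a one-sided binomial test against the 33% guessing rate of a three-alternative forced choice. The trial counts below are hypothetical, not the study's actual data.

```python
from math import comb

def p_above_chance(correct, trials, chance=1/3):
    """One-sided binomial p-value: P(X >= correct) under pure guessing."""
    return sum(comb(trials, k) * chance**k * (1 - chance)**(trials - k)
               for k in range(correct, trials + 1))

# Hypothetical observer: 90 correct out of 192 trials (~47%) vs. 33% chance.
p = p_above_chance(90, 192)
```

An observer at exactly chance (64/192) would yield a large p-value, while ~47% correct over 192 trials is already highly significant.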

The results are shown here as performance rates for each category of intention. Because it was a three-alternative forced choice, chance level was set at 33%. Human performance was significantly higher than chance, but more importantly, participants judged themselves lower than their actual performance. In conclusion, humans can read intention on the sole basis of early movement kinematics, even at preferred speed only. However, this ability is usually under-estimated because it does not reach participants' awareness. So the next question is: can an artificial system learn to discriminate between these three intentions?

A complex problem

Artificial neural network classifier
- Learning algorithm: backpropagation (FANN, Nissen 2005)
- 10,000 epochs; goal: MSE < 10^-5
- 3 hidden units, 3 outputs
- Up to 24 inputs (quantity of movement information, in 50 ms portions)
- 480 different networks (24 input sizes × 20 networks)
- Database: 192 sequences

We designed a simple architecture and a learning procedure to ask a further question: how much movement information does an artificial system need to classify the three different intentions above chance level? We used a multi-layer perceptron. The output layer had three units, matching the three possible answers. We also used 3 hidden units, but we manipulated the quantity of input given to the system: it had to learn the association between the correct category (on the three outputs) and the velocity information supplied in small portions of 50 ms. We designed a total of 480 different networks, then checked performance after learning in an evaluation procedure and verified the classification rates.

Conclusions: a simple neural network classifier can successfully categorize motor intentions, as well as humans can, from only low-level kinematics.

A simple solution
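The architecture described (a variable-size velocity input, 3 hidden units, 3 outputs, trained by backpropagation) can be sketched with a small hand-rolled NumPy perceptron. The learning rate, epoch count, and toy data here are assumptions for illustration; the original work used the FANN library, not this loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_mlp(X, Y, hidden=3, lr=1.0, epochs=5000):
    """Tiny one-hidden-layer sigmoid MLP trained by backpropagation (MSE loss)."""
    n_in, n_out = X.shape[1], Y.shape[1]
    W1 = rng.normal(0, 0.5, (n_in, hidden)); b1 = np.zeros(hidden)
    W2 = rng.normal(0, 0.5, (hidden, n_out)); b2 = np.zeros(n_out)
    sig = lambda z: 1.0 / (1.0 + np.exp(-z))
    for _ in range(epochs):
        H = sig(X @ W1 + b1)                 # hidden activations
        O = sig(H @ W2 + b2)                 # output activations
        dO = (O - Y) * O * (1 - O)           # output delta for MSE loss
        dH = (dO @ W2.T) * H * (1 - H)       # backpropagated hidden delta
        W2 -= lr * H.T @ dO / len(X); b2 -= lr * dO.mean(0)
        W1 -= lr * X.T @ dH / len(X); b1 -= lr * dH.mean(0)
    return lambda X: sig(sig(X @ W1 + b1) @ W2 + b2)

# Toy stand-in for the 192 sequences: three noisy, separable
# "velocity profiles" (hypothetical), one per intention class.
X = np.array([[1., 0., 0.], [0., 1., 0.], [0., 0., 1.]] * 10)
X += rng.normal(0, 0.05, X.shape)
Y = np.array([[1, 0, 0], [0, 1, 0], [0, 0, 1]] * 10, dtype=float)
predict = train_mlp(X, Y)
acc = (predict(X).argmax(1) == Y.argmax(1)).mean()
```

Varying the number of input columns (here fixed at 3) mimics the manipulation of how much movement information each network receives.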

End of the first motor element (video stop); 20 different networks for each input size.

The X axis shows the input size relative to the total movement perceived by the multi-layer perceptron, and the Y axis shows the correct classification rates. Overall, the more movement was seen, the higher the classification rates, with some networks performing near 100% after 1.2 seconds of movement information. Before 400 ms, the architecture could not learn the difference from movement kinematics alone. More importantly, the first moment at which the correct rates rose above chance level was 450 ms, just before the end of the first motor element (blue line). At this specific time, the classification rates are similar to human performance. In conclusion, a simple neural network without any high-level cognitive ability was able to categorize the three different intentions from low-level kinematic information alone.
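Locating "the first moment the correct rate exceeds chance" on such a curve is a simple scan over accuracy as a function of input size. The accuracy values below are invented for illustration; only the 50 ms step per input comes from the text.

```python
# Hypothetical accuracy per input size (each input adds 50 ms of movement).
accuracies = [0.30, 0.32, 0.31, 0.33, 0.34, 0.35, 0.33, 0.36, 0.40,
              0.45, 0.52, 0.61, 0.70, 0.78, 0.85, 0.90, 0.94, 0.97]
chance, margin = 1 / 3, 0.05        # require accuracy clearly above chance

# Index of the first input size whose accuracy clears the threshold.
first = next(i for i, acc in enumerate(accuracies)
             if acc > chance + margin)
first_ms = (first + 1) * 50         # convert input count to milliseconds
```

A fixed margin is the crudest criterion; a per-point binomial test against chance would be the statistically careful version.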

Variability is not RANDOMLY DISTRIBUTED!
Overlap area: 31.4%; no overlap: 68.6%.
Conclusion: recognition is better when the example is far from the optimal solution.

One very important result from the observation study is that the kinematic analysis of the 192 different stimuli revealed a strong overlap. Here you can see a representation of the variability of each reach-to-grasp movement used as a stimulus in the observation studies. We first conducted a principal component analysis and found that the two parameters that best explain the variability are movement time (X axis) and amplitude of peak velocity (Y axis); each point is a reach-to-grasp movement. The variability along these two parameters is not randomly distributed. In fact, we suggest that the overlapping area can be seen as an optimal solution that works for almost every situation (Play, Me, You). This is also the solution that would be found by an algorithm based on optimal functions, which are often used in robotics. More importantly, the small deviations from optimal performance seem to be the very cue that humans used to guide their judgment. Looking at the performance for the movies outside the overlapping area, we found much better classification rates, but still not 100%. So it may be that humans use additional cognitive processes, complementary to kinematics, to understand actions, for example context or prior knowledge that can sharpen their intention judgment.
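A principal component analysis over per-trial kinematic parameters (here just movement time and peak velocity, the two parameters named on the slide) can be sketched with NumPy's eigendecomposition. The data below are synthetic, with an assumed negative correlation between the two parameters.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic per-trial parameters: movement time (s), peak velocity (mm/s).
mt = rng.normal(0.8, 0.1, 192)
pv = 1500 - 800 * mt + rng.normal(0, 40, 192)   # correlated, as real reaches are
X = np.column_stack([mt, pv])

# PCA: center, standardize (the units differ), then eigendecompose the covariance.
Z = (X - X.mean(0)) / X.std(0)
cov = Z.T @ Z / (len(Z) - 1)
eigvals, eigvecs = np.linalg.eigh(cov)          # eigenvalues in ascending order
explained = eigvals[::-1] / eigvals.sum()       # variance ratio per component
scores = Z @ eigvecs[:, ::-1]                   # trial coordinates on the PCs
```

With strongly correlated parameters, the first component captures most of the variance, which is exactly the structured (non-random) variability the slide points to.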

Computational Model for Mental State Inference (Oztop et al., 2005)

Neurophysiological hypotheses (Grosbras et al., 2006)

Grosbras et al., 2006; Grèzes et al., 2007; Pichon et al., 2008; Pichon et al., 2009

A few examples to finish

Possible emotions:

Joy, Fear, Sadness, Anger, Disgust

Source: Atkinson et al. (2004, 2007), https://community.dur.ac.uk/a.p.atkinson/Stimuli.html

Conclusions

At least 4 levels of observation of biological movement (Troje, 2008):
1) Life detector
2) Structure from motion
3) Action recognition
4) Style recognition
5) Social contingencies?

Thank you for your attention