This doctoral research project was prepared by Catherine PELACHAUD

Description of a doctoral research project

Context-sensitive generation of multimodal behaviours

Keywords:

Research project summary (Language 1)

The PhD will be part of the European project ARIA-VALUSPA (Affective Retrieval of Information Assistants using Virtual Agents with Linguistic Understanding, Social skills and Personalised Aspects). The Affective Retrieval of Information Assistants, ARIAs, are Embodied Conversational Agents (ECAs) with linguistic understanding, social skills, and personalised aspects. They should be able to communicate using the same verbal and non-verbal modalities used in human-human interaction, have interpersonal skills akin to those of humans, and adapt to the user by learning their preferences, personality, and manner of interaction. The agents should also be capable of dealing with unexpected situations, such as the user suddenly changing topic or task, the social group changing when a second user arrives, the user interrupting the ECA, or even the user suddenly changing their attitude.

The aim of the PhD is to develop a computational model of the ECA's behaviours that conveys information at different levels:
• interpersonal stance,
• overall communicative behaviours,
• emergence of synchrony with the user's behaviour,
• multimodal responses to unexpected situations.

The work will make use of an existing ECA platform, Greta (Ochs et al., 2013). In particular, we will extend our previous model of multimodal behaviours (Chollet et al., 2014), in which we apply sequence mining to corpus data to extract frequent sequences for different types of attitudes and communicative expressions, and use them as data to generate non-verbal behaviours for the ECAs. To create an adaptive ECA, we will use a reinforcement learning algorithm to update the efficiency of a non-verbal behaviour used to communicate a given intention and/or emotional state. The reinforcement signal will be the achievement of the communicative intention and/or emotional state. The non-verbal behaviour of the ECA will be selected based on its efficiency, computed dynamically during the interaction.
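To make the selection-and-update loop concrete, the sketch below is a hypothetical, simplified illustration in Python, not the project's actual implementation: it treats each candidate non-verbal behaviour for a given intention as an arm of a multi-armed bandit, keeps an efficiency estimate per (intention, behaviour) pair, updates it from a binary reinforcement signal (whether the communicative intention was achieved), and selects behaviours epsilon-greedily. The class name BehaviourSelector and the intention and behaviour labels are assumptions made for illustration.

    import random
    from collections import defaultdict

    class BehaviourSelector:
        """Hypothetical sketch: epsilon-greedy selection of non-verbal
        behaviours, with an efficiency estimate updated online from a
        reinforcement signal (was the communicative intention achieved?)."""

        def __init__(self, epsilon=0.1, learning_rate=0.2):
            self.epsilon = epsilon    # exploration probability
            self.lr = learning_rate   # step size of the efficiency update
            # efficiency[(intention, behaviour)] -> estimated success rate
            self.efficiency = defaultdict(lambda: 0.5)

        def select(self, intention, candidate_behaviours):
            """Explore with probability epsilon; otherwise exploit the
            currently most efficient behaviour for this intention."""
            if random.random() < self.epsilon:
                return random.choice(candidate_behaviours)
            return max(candidate_behaviours,
                       key=lambda b: self.efficiency[(intention, b)])

        def update(self, intention, behaviour, achieved):
            """Reinforcement signal: 1.0 if the communicative intention
            was achieved, 0.0 otherwise."""
            key = (intention, behaviour)
            reward = 1.0 if achieved else 0.0
            self.efficiency[key] += self.lr * (reward - self.efficiency[key])

    # Example: choosing how to convey a friendly stance (labels are invented)
    selector = BehaviourSelector()
    behaviours = ["smile", "head_nod", "open_gesture"]
    chosen = selector.select("convey_friendliness", behaviours)
    # ... play the behaviour on the agent, observe the user's reaction ...
    selector.update("convey_friendliness", chosen, achieved=True)

The exponential moving average in update keeps the efficiency estimate adapting dynamically during the interaction, as the summary above requires, rather than averaging over the whole history.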

Research project summary (Language 2)

We will simulate various types of behaviour responses to unexpected situations: the interruption of a behaviour (arising from the abandonment of the current intention), which could be followed by a repair action or a hold (of the gesture); the coarticulation of a behaviour into another one (as the current intention is followed by the instantiation of a new intention); and the merging of a behaviour with another one (to adapt the current intention to a new one). Interruption and holding of a behaviour (be it a gesture, a facial expression, a gaze behaviour, or a torso movement) will be modelled at the level of the behaviour realizer of the virtual agent platform. Coarticulating into another behaviour requires some re-planning to compute which behaviour should appear next, that is, which intention linked to a subsequent behaviour is triggered.
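As a rough illustration of how a behaviour realizer might dispatch these four cases (interrupt and repair, hold, coarticulate, merge), the Python sketch below assumes a simplified representation of a running behaviour with a modality, an intention, and a gesture phase. The decision rule itself (for instance, holding only mid-stroke) is a hypothetical placeholder, not the project's model.

    from dataclasses import dataclass
    from enum import Enum, auto
    from typing import Optional

    class Response(Enum):
        """Possible realizer responses to an unexpected event."""
        INTERRUPT_AND_REPAIR = auto()  # stop, then perform a repair action
        HOLD = auto()                  # freeze the gesture in its current phase
        COARTICULATE = auto()          # re-plan: blend into the next behaviour
        MERGE = auto()                 # adapt the behaviour to the new intention

    @dataclass
    class RunningBehaviour:
        modality: str    # "gesture", "face", "gaze" or "torso"
        intention: str   # communicative intention the behaviour realises
        phase: str       # e.g. "preparation", "stroke", "retraction"

    def react(current: RunningBehaviour,
              new_intention: Optional[str],
              intention_dropped: bool) -> Response:
        """Hypothetical decision rule sketching the cases in the text:
        - the current intention stops with no successor -> hold or interrupt;
        - a new intention arrives as the current one ends -> coarticulate
          (re-planning decides which behaviour appears next);
        - the current intention must be adapted to a new one -> merge."""
        if intention_dropped and new_intention is None:
            # Placeholder heuristic: holding mid-stroke, repairing otherwise.
            if current.phase == "stroke":
                return Response.HOLD
            return Response.INTERRUPT_AND_REPAIR
        if intention_dropped and new_intention is not None:
            return Response.COARTICULATE
        return Response.MERGE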