•
Researchers have theoretically proposed that humans decode other individuals' emotions or elementary cognitive appraisals from particular sets of facial action units (AUs). However, only a few empirical studies have systematically tested the relationships between the decoding of emotions/appraisals and sets of AUs, and the results are mixed. Furthermore, previous studies relied on facial expressions posed by actors, and no study has used spontaneous and dynamic facial expressions in naturalistic settings. We investigated this issue using video recordings of facial expressions filmed unobtrusively in a real-life emotional situation, specifically loss of luggage at an airport. The AUs observed in the videos were annotated using the Facial Action Coding System. Male participants (n = 98) were asked to decode emotions (e.g., anger) and appraisals (e.g., suddenness) from the facial expressions. We explored the relationships between emotion/appraisal decoding and AUs using stepwise multiple regression analyses. The results revealed that all the rated emotions and appraisals were associated with sets of AUs. The profiles of the regression equations included AUs both consistent and inconsistent with theoretical proposals. The results suggest that (1) the decoding of emotions and appraisals in facial expressions is implemented through the perception of sets of AUs, and (2) the profiles of such AU sets can differ from those in previous theories.
DOI: 10.3389/fpsyg.2018.02678
Publication Date: 2018
Publication Name: Frontiers in Psychology
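The analysis described above lends itself to a compact sketch. Below is a minimal, hypothetical illustration of forward stepwise regression relating binary AU codings to mean emotion ratings; the AU names, data, and the adjusted-R² selection criterion are assumptions for illustration, not the paper's exact procedure or data.

```python
import numpy as np

def adjusted_r2(y, X):
    """Adjusted R^2 of an OLS fit of y on X (with intercept)."""
    n, k = X.shape
    Xi = np.column_stack([np.ones(n), X])
    beta, *_ = np.linalg.lstsq(Xi, y, rcond=None)
    resid = y - Xi @ beta
    r2 = 1 - (resid @ resid) / ((y - y.mean()) ** 2).sum()
    return 1 - (1 - r2) * (n - 1) / (n - k - 1)

def forward_stepwise(y, X, names):
    """Greedily add the AU predictor that most improves adjusted R^2."""
    selected, best = [], -np.inf
    improved = True
    while improved:
        improved = False
        for j in range(X.shape[1]):
            if j in selected:
                continue
            score = adjusted_r2(y, X[:, selected + [j]])
            if score > best:
                best, best_j, improved = score, j, True
        if improved:
            selected.append(best_j)
    return [names[j] for j in selected], best

# Hypothetical data: 30 stimuli x 5 AUs (binary presence), mean anger ratings.
rng = np.random.default_rng(0)
aus = rng.integers(0, 2, size=(30, 5)).astype(float)
anger = 2.0 * aus[:, 0] + 1.0 * aus[:, 3] + rng.normal(0, 0.5, 30)
print(forward_stepwise(anger, aus, ["AU4", "AU5", "AU7", "AU23", "AU12"]))
```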
•
Publication Date: 2005
Publication Name: 2005 IEEE International Conference on Multimedia and Expo
•
Publication Date: 2006
Publication Name: Lecture Notes in Computer Science
•
In order to be believable, embodied conversational agents (ECAs) must show expression of emotions in a consistent and natural-looking way across modalities. The ECA has to be able to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires one to study and represent emotions and coordination of modalities during non-basic realistic human behaviour, to define languages for representing such behaviours to be displayed by the ECA, and to have access to mono-modal representations such as gesture repositories.
Publication Date: 2010
Publication Name: Cognitive Technologies
•
Publication Date: 2016
Publication Name: Proceedings of the 18th ACM International Conference on Multimodal Interaction - ICMI 2016
•
by Stefan Kopp and Catherine Pelachaud
Publisher: Springer
Publication Date: Jan 1, 2007
Publication Name: Intelligent Virtual …
•
Publisher: Springer
Publication Date: Jan 1, 2008
Publication Name: Intelligent Virtual …
•
Emotional expressions play a very important role in the interaction between virtual agents and human users. In this paper, we present a new constraint-based approach to the generation of multimodal emotional displays. The displays generated with our method are not limited to the face, but are composed of different signals partially ordered in time and belonging to different modalities. We also describe the evaluation of the main features of our approach.
Journal Name: IEEE Transactions on Affective Computing
Publication Date: Jul 2011
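As a rough illustration of the constraint-based idea, the sketch below schedules signals from different modalities under a partial temporal order using a topological sort; the signal names and constraints are invented, and the real system's constraint language is certainly richer.

```python
from graphlib import TopologicalSorter  # stdlib, Python 3.9+

# Hypothetical multimodal display: each signal belongs to a modality, and
# "before" constraints impose only a partial order on their onsets.
signals = {
    "frown": "face",
    "head_shake": "head",
    "arms_cross": "gesture",
    "gaze_away": "gaze",
}
# Constraints: frown starts before head_shake; arms_cross before gaze_away.
before = {"head_shake": {"frown"}, "gaze_away": {"arms_cross"}}

ts = TopologicalSorter(before)
ts.prepare()
t = 0.0
while ts.is_active():
    ready = list(ts.get_ready())  # signals whose constraints are satisfied
    for s in ready:
        print(f"t={t:.1f}s start {s} ({signals.get(s, '?')})")
        ts.done(s)
    t += 0.4                      # illustrative stagger between layers
```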
•
We describe a system that allows an impostor to lead an audio-visual telephone conversation and sign data electronically on behalf of an authorized client. During the conversation, the audio and video of the impostor are altered so as to mimic the client. The voice of the impostor is processed and used to reproduce the voice of the authorized client. Speech segments obtained from the client's recordings are used to synthesize new sentences that the client never pronounced. On the visual side, the impostor's talking face is detected ...
Publication Date: 2005
Publication Name: Lecture Notes in Computer Science
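The synthesis step described above, building sentences the client never pronounced out of recorded segments, can be caricatured in a few lines. The sketch below concatenates invented per-word units with a linear crossfade; the unit inventory, sample rate, and crossfade length are all assumptions, not the system's actual unit-selection method.

```python
import numpy as np

# Toy "unit selection": new sentences are assembled by concatenating short
# speech segments. The unit inventory here is random noise standing in for
# real recordings; everything is invented for illustration.
SR = 16000
rng = np.random.default_rng(0)
units = {w: rng.normal(0, 0.1, SR // 4)
         for w in ["please", "transfer", "the", "funds"]}

def synthesize(words, xfade=400):
    """Concatenate per-word units with a short linear crossfade."""
    out = units[words[0]].copy()
    ramp = np.linspace(0, 1, xfade)
    for w in words[1:]:
        nxt = units[w]
        out[-xfade:] = out[-xfade:] * (1 - ramp) + nxt[:xfade] * ramp
        out = np.concatenate([out, nxt[xfade:]])
    return out

utterance = synthesize(["please", "transfer", "the", "funds"])
print(utterance.shape)  # samples of a sentence never actually pronounced
```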
•
The exploration of how we react to the world and interact with it and each other remains one of the greatest scientific challenges. The latest research trends in cognitive sciences argue that our common view of intelligence is too narrow, ignoring a crucial range of abilities that matter immensely for how people do in life. This range of abilities is called social intelligence and includes the ability to express and recognise social signals produced during social interactions, such as agreement, politeness, empathy, friendliness, and conflict, coupled with the ability to manage them in order to get along well with others while winning their cooperation. Social Signal Processing (SSP) is the new research domain that aims at understanding and modelling social interactions (human-science goals), and at providing computers with similar abilities in human–computer interaction scenarios (technological goals). SSP is in its infancy, and the journey towards artificial social intelligence and socially aware computing is still long. This research agenda is twofold: a discussion of how the field is understood by those currently active in it, and a discussion of the issues that researchers in this formative field face.
More Info: Pantic, M., Cowie, R., D'Errico, F., Heylen, D., Mehu, M., Pelachaud, C., Poggi, I., Schröder, M., & Vinciarelli, A. (2011). In T.B. Moeslund, A. Hilton, V. Krüger, & L. Sigal (Eds.), Visual analysis of humans: Looking at people (pp. 511-538). London: Springer.
Publisher: Springer
Publication Date: Jan 1, 2011
Publication Name: Visual Analysis of …
•
The ease and robustness of human-human communication is due to extremely high recognition accuracy (using multiple input channels) and the redundant and complementary use of several modalities. Research in multimodal systems is based on the expectation that human-computer interaction can benefit from modeling several modalities in analogous ways.
Publisher: limsi.fr
Publication Date: 2000
Publication Name: Handbook of Standards and Resources for Spoken Language Systems-Supplement
•
Publication Date: Nov 13, 2013
•
Publisher: ieeexplore.ieee.org
Publication Date: Jan 1, 2011
Publication Name: Affective Computing …
Research Interests: Artificial Intelligence, Human Computer Interaction, Affective Computing, Speech Recognition, Nonverbal Behavior, Dialogue System, Software Agents, Emotional Computing, Emotion Recognition, Real Time Systems, Spoken Language Understanding, Real Time, User Behavior, Kansei Engineering, and Type System
•
Publication Name: dfki.de
•
Publisher: IEEE Computer …
Publication Date: Jan 1, 2011
Publication Name: IEEE …
•
Publisher: ieeexplore.ieee.org
Publication Name: Automatic Face & …
•
In human-to-human communication, signals from multiple channels are at work. We communicate not only through words but also by intonation, gaze, hand and body gestures, and facial expressions. These verbal and nonverbal signals have a role in the communicative process. They add, modify, or substitute information in discourse and are highly linked with each other. The ease and robustness of human-human communication is due to extremely high recognition accuracy using multiple input channels and the redundant and complementary use of several modalities. Human-computer interaction can benefit from modeling several modalities in analogous ways. Multimodal systems represent and manipulate information from different human communication channels at multiple levels of abstraction. The purpose of this chapter is to present current work towards establishing standards and common resources for multimodal systems. The chapter covers an overview and survey of current multimodal systems and clarifies the basic terminology. It also provides recommendations on the different components of such systems. More particularly, this chapter focuses on multimodal systems that have speech either as input or output modality. The chapter discusses aspects of the various types of multimodal systems according to the non-speech modalities with which speech input and output are associated. It describes systems that combine speech input with information from the visual channel (face detection, face recognition, tracking of facial features, and lip reading); systems using talking faces and conversational agents (combining speech with visual output) are also presented. Another focus of the chapter is the concepts and issues related to multimodal systems combining speech input with other input modalities, as well as to systems combining speech output with other output modalities.
More Info: Benoit, C., Martin, J.-C., Pelachaud, C., Schomaker, L., & Suhm, B. (2000). Audio-visual and multimodal speech-based systems. In D. Gibbon, I. Mertins, & R. Moore (Eds.), Handbook of multimodal and spoken dialogue systems: Resources, terminology and product evaluation (pp. 102-203). Dordrecht: Kluwer Academic Publishers.
Publisher: Citeseer
Publication Date: Jan 1, 2000
Publication Name: Handbook of Standards …
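One concrete instance of the "redundant and complementary" channels the chapter surveys is audio-visual word recognition. The sketch below shows a generic late-fusion step over per-word scores; the scores, weights, and function names are hypothetical, not taken from the chapter.

```python
# Late fusion of two modality scores, a common scheme in audio-visual
# speech systems. Scores and weights are hypothetical.
def fuse(audio_scores, visual_scores, w_audio=0.7, w_visual=0.3):
    """Weighted linear combination of per-word scores from each channel."""
    words = set(audio_scores) | set(visual_scores)
    fused = {
        w: w_audio * audio_scores.get(w, 0.0) + w_visual * visual_scores.get(w, 0.0)
        for w in words
    }
    return max(fused, key=fused.get), fused

audio = {"bat": 0.55, "pat": 0.45}    # acoustics are ambiguous for b/p
visual = {"bat": 0.85, "pat": 0.15}   # lip shape disambiguates the bilabial
print(fuse(audio, visual))            # -> ('bat', {...})
```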
•
Publisher: Springer
Publication Date: Jan 1, 2009
Publication Name: … Signals: Cognitive and …
Research Interests: Information Systems, Computer Science, Human Computer Interaction, Gesture, Augmented Reality, User Interface, Face, Virtual Worlds, Speech Processing, Speech, Computers and Society, Artificial Intelligent, Lecture notes, Standardisation, Audio Visual, Technological Development, Computers and Education, Human Machine Interaction, and Information System
•
Publisher: Springer
Publication Date: Jan 1, 2010
Publication Name: Gesture in Embodied …
•
Publisher: Springer-Verlag, Berlin, Heidelberg
Publication Date: Jan 1, 2009
•
Publisher: www-public.int-edu.eu
Publication Date: 2010
•
Publisher: Springer
Publication Name: … of the Third International Conference on …
•
In this project, which lies at the intersection of Human-Robot Interaction (HRI) and Human-Computer Interaction (HCI), we have examined the design of an open-source, real-time software platform for controlling the feedback provided by an AIBO robot and/or by the GRETA Embodied Conversational Agent when listening to a story told by a human narrator. Based on ground truth data obtained from the recording and annotation of an audio-visual storytelling database, and containing various examples of human-human ...
Publisher: enterface08.limsi.fr
Publication Date: Sep 1, 2008
Publication Name: QPSR of the numediart research program. Ed. by Thierry Dutoit and Benoît Macq
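A listening-agent platform of this kind typically maps prosodic events to feedback behaviors. The toy rule below, with invented thresholds and event frames, emits a backchannel nod when the narrator pauses shortly after a pitch peak; it sketches the general idea, not the project's actual rules.

```python
# Hypothetical rule: if the narrator has been silent for ~0.6 s shortly after
# a pitch peak, the listening agent (robot or ECA) emits a backchannel nod.
PAUSE_S = 0.6

def feedback_rule(last_voiced_t, pitch_peak_recent, now):
    return (now - last_voiced_t) >= PAUSE_S and pitch_peak_recent

def send_nod():
    print("-> agent backchannel: head nod")  # would drive AIBO/GRETA here

# Toy event stream of fake prosody frames: (time, voiced?, pitch_peak?).
frames = [(0.0, True, False), (0.3, True, True), (0.5, False, False),
          (0.9, False, False), (1.2, False, False)]
last_voiced, peak = 0.0, False
for t, voiced, pitch_peak in frames:
    if voiced:
        last_voiced = t
    peak = peak or pitch_peak
    if feedback_rule(last_voiced, peak, t):
        send_nod()
        peak = False  # consume the peak so we nod only once per pause
```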
•
Publisher: Citeseer
Publication Date: 2010
Publication Name: Relation
•
Our objective is to animate an embodied conversational agent (ECA) with communicative gestures rendered with the expressivity of the real human user it represents. We describe an approach to estimating a subset of the expressivity parameters defined in the literature (namely spatial and temporal extent) from captured motion trajectories. We first validate this estimation against synthesized motion and then show results with real human motion. The estimated expressivity is then sent to the animation engine of an ECA that becomes a ...
Publisher: Springer
Publication Date: 2011
Publication Name: Intelligent Virtual Agents
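To make the two named parameters concrete, the sketch below estimates spatial extent (here, the bounding-box diagonal of the wrist path) and temporal extent (here, mean wrist speed) from a trajectory array; these formulas are plausible stand-ins, not the authors' exact estimators.

```python
import numpy as np

def expressivity(traj, fps=25.0):
    """Estimate spatial and temporal extent from a wrist trajectory.

    traj: (T, 3) array of wrist positions. The bounding-box diagonal stands
    in for spatial extent; mean speed stands in for temporal extent.
    """
    extent = traj.max(axis=0) - traj.min(axis=0)
    spatial = float(np.linalg.norm(extent))             # bounding-box diagonal
    speeds = np.linalg.norm(np.diff(traj, axis=0), axis=1) * fps
    temporal = float(speeds.mean())                     # mean wrist speed
    return spatial, temporal

# Hypothetical captured motion: a small, slow beat vs. a wide, fast sweep.
t = np.linspace(0, 1, 25)[:, None]
small = 0.05 * np.sin(2 * np.pi * t) * np.array([1.0, 0.2, 0.0])
wide = 0.40 * np.sin(2 * np.pi * t) * np.array([1.0, 0.2, 0.0])
print(expressivity(small), expressivity(wide))
```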
•
Publisher: dl.acm.org
Publication Date: Jan 1, 2004
Publication Name: Proceedings of the Third International …
•
Relations between emotions and multimodal behaviors have mostly been studied in the case of acted basic emotions. In this paper, we describe two experiments studying these relations with a copy-synthesis approach. We start from video clips of TV interviews including real-life behaviors. A protocol and a coding scheme have been defined for annotating these clips at several levels (context, emotion,
Publication Date: 2000
•
Publication Date: 2006
Publication Name: Revue d'intelligence artificielle
•
Publisher: Springer
Publication Date: Jan 1, 2005
Publication Name: Intelligent Virtual …
•
Publisher: Springer
Publication Date: Jan 1, 2005
Publication Name: Modeling and Using …
•
Embodied Conversational Characters: Representation Formats for Multimodal Communicative Behaviours
by Hannes Pirker and Catherine Pelachaud
Publication Date: 2010
Publication Name: Cognitive Technologies
•
by Hannes Pirker and Catherine Pelachaud
Working with emotion-related states in technological contexts requires a standard representation format. Based on that premise, the W3C Emotion Incubator group was created to lay the foundations for such a standard. The paper reports on two results of the group’s work: a collection of use cases, and the resulting requirements. We compiled a rich collection of use cases, and grouped them into three types: data annotation, emotion recognition, and generation of emotion-related behaviour. Out of these, a structured set of requirements was distilled. It comprises the representation of the emotion-related state itself, some meta-information about that representation, various kinds of links to the “rest of the world”, and several kinds of global metadata. We summarise the work, and provide pointers to the working documents containing full details.
Publication Date: 2007
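The four requirement groups the paper distills (the emotion-related state itself, meta-information about its representation, links to the "rest of the world", and global metadata) map naturally onto a structured record. The dataclass below is an invented illustration of that grouping, not the group's eventual XML-based syntax.

```python
from dataclasses import dataclass, field
from typing import Optional

# Illustrative mapping of the paper's requirement groups onto a record; the
# field names are invented here, not the working group's actual syntax.
@dataclass
class EmotionAnnotation:
    # 1. The emotion-related state itself
    category: str                       # e.g. "anger"
    intensity: float = 0.5
    # 2. Meta-information about the representation
    confidence: Optional[float] = None
    vocabulary: str = "everyday-categories"
    # 3. Links to the "rest of the world"
    experienced_by: Optional[str] = None        # e.g. a URI for the subject
    expressed_through: list = field(default_factory=list)  # modalities
    # 4. Global metadata
    annotator: Optional[str] = None

ann = EmotionAnnotation("anger", intensity=0.8, confidence=0.9,
                        expressed_through=["face", "voice"],
                        annotator="coder-01")
print(ann)
```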
•
In order to be believable, embodied conversational agents (ECAs) must show expression of emotions in a consistent and natural-looking way across modalities. The ECA has to be able to display coordinated signs of emotion during realistic emotional behaviour. Such a capability requires one to study and represent emotions and coordination of modalities during non-basic realistic human behaviour, to define languages for representing such behaviours to be displayed by the ECA, and to have access to mono-modal representations such as gesture repositories. This chapter is concerned with coordinating the generation of signs in multiple modalities in such an affective agent. Designers of an affective agent need to know how it should coordinate its facial expression, speech, gestures, and other modalities so as to show emotion. This synchronisation of modalities is a main feature of emotions.
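One concrete form such coordination can take is temporal alignment across modalities. The toy function below shifts a gesture's phases so its stroke peak lands on a speech accent; the phase names and times are hypothetical, not the chapter's representation language.

```python
# Toy synchronisation: shift a gesture so that its stroke peak coincides with
# the accented syllable of the speech. Times (in seconds) are hypothetical.
def align_gesture(gesture, accent_time):
    """Return gesture phases shifted so the stroke peak hits accent_time."""
    shift = accent_time - gesture["stroke_peak"]
    return {phase: t + shift for phase, t in gesture.items()}

gesture = {"preparation": 0.0, "stroke_start": 0.3,
           "stroke_peak": 0.45, "retraction": 0.7}
print(align_gesture(gesture, accent_time=1.20))
```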
•
Publication Date: 2004
Publication Name: Cognitive Technologies
•
Developing an embodied conversational agent that is able to exhibit human-like behavior while communicating with other virtual or human agents requires enriching a typical NLG architecture. The purpose of this paper is to describe our efforts in this direction and to illustrate our approach to the generation of an agent that shows personality and social intelligence, and is able to react emotionally to events occurring in the environment, consistently with her goals and with the context in which the conversation takes place.
Publisher: cs.rutgers.edu
Publication Date: Jul 1, 2002
Publication Name: International Natural Language Generation Conference
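A minimal sketch of the emotional-reaction step such an enriched architecture implies: an event is appraised against the agent's goals, and the resulting emotion label would then bias generation. The rules and data below are invented, loosely in the spirit of goal-based appraisal models, not the paper's actual mechanism.

```python
# Toy appraisal: events are scored against the agent's goals, and the
# resulting emotion would then bias the NLG choices downstream.
def appraise(event, goals):
    relevant = [g for g in goals if g["topic"] == event["topic"]]
    if not relevant:
        return "neutral"
    helped = all(event["effect"] == g["desired"] for g in relevant)
    return "joy" if helped else "distress"

goals = [{"topic": "exam", "desired": "pass"}]
print(appraise({"topic": "exam", "effect": "pass"}, goals))   # joy
print(appraise({"topic": "exam", "effect": "fail"}, goals))   # distress
```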
•
Publisher: portal.acm.org
Publication Date: Jan 1, 2002
Publication Name: Proceedings of the …
•
Publisher: Citeseer
Publication Date: Jan 1, 2001
Publication Name: … Joint Conference on …
•
In this paper we first describe our enriched discourse generator, explaining the two sets of rules (trigger and regulation) we have added. We also review the different types of gaze communicative acts. Finally, we present the variables defining the context and how they modify the computation of the display of the communicative acts.
Publisher: Citeseer
Publication Date: 2000
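The two rule types named above suggest a two-stage pipeline: trigger rules propose a gaze communicative act from the discourse move, and regulation rules filter it against the context. The sketch below is an invented illustration of that split, not the paper's rule set.

```python
# Minimal two-stage rule pipeline: trigger rules propose a gaze act from the
# discourse move; regulation rules veto or adapt it given context.
TRIGGERS = {
    "give_turn": "gaze_at_listener",
    "hold_turn": "gaze_away",
    "emphasize": "gaze_at_listener",
}

def regulate(act, context):
    # e.g. suppress mutual gaze if it has already lasted too long.
    if act == "gaze_at_listener" and context.get("mutual_gaze_s", 0) > 2.0:
        return "gaze_away"
    return act

def gaze_for(move, context):
    act = TRIGGERS.get(move, "gaze_neutral")
    return regulate(act, context)

print(gaze_for("give_turn", {"mutual_gaze_s": 0.5}))  # gaze_at_listener
print(gaze_for("emphasize", {"mutual_gaze_s": 3.0}))  # gaze_away
```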
•
Publisher: Elsevier
Publication Date: Jan 1, 2003
Publication Name: International Journal of …
•
The project has two aims: the study of mental state attributions to previously perceived non-verbal behaviours, and the contribution to the non-verbal communication skills of embodied agents. For the first task, short audio-visual clips presenting a person in a face-to-face context with another human were evaluated through a forced-choice questionnaire. The questionnaire was based on appraisal theory items and on the attribution of emotional labels. Appraisal theory enables the understanding of mental ...
Publisher: Citeseer
Publication Date: 2009
Publication Name: ACII 2009 Affective Computing and Intelligent Interaction
•
Publication Date: 2013
Publication Name: Social Emotions in Nature and Artifact
•
Publication Date: 2013
Publication Name: Pelachaud/Emotion-Oriented Systems
•
1.1. Expressive conversational agents. In recent years there has been growing interest in the development of animated conversational agents (ACAs) that express emotions. ACAs are virtual agents capable of communicating autonomously with a user, whether through verbal or non-verbal modes. The interest in developing credible affective expressivity in ACAs is motivated by the goal of improving human-machine interaction (see chapter 8 [OCH 09] and chapter 9 [CAN 09]). To be able to express emotions, the agent must have access to a model specifying communication that humans can understand, and must also have the technical capability to communicate non-verbally. Studies show that humans communicate their emotions through several modalities at once, although the face is considered the most suitable site for the expression of ...
•
Publication Date: 2015
Publication Name: Proceedings 2014
•
Publication Date: 2014
Publication Name: Procedia Computer Science
•
This volume brings together the advanced research results obtained by the European COST Action 2102 "Cross Modal Analysis of Verbal and Nonverbal Communication", primarily discussed at the PINK SSPnet-COST2102 International Conference on Analysis of Verbal and Nonverbal Communication and Enactment: The Processing Issues, held in Budapest, Hungary, in September 2010. The 40 papers presented were carefully reviewed and selected for inclusion in the book. The volume is arranged into two scientific sections. The first section, Multimodal Signals: Analysis, Processing and Computational Issues, deals with conjectural and processing issues of defining models, algorithms, and heuristic strategies for data analysis, coordination of the data flow, and optimal encoding of multi-channel verbal and nonverbal features. The second section, Verbal and Nonverbal Social Signals, presents original studies that provide theoretical and practical solutions to the modelling of timing synchronization between linguistic and paralinguistic expressions, actions, body movements, and activities in human interaction, and to their contribution to effective human-machine interaction.
