12th ACII 2024: Glasgow, UK
12th International Conference on Affective Computing and Intelligent Interaction, ACII 2024, Glasgow, United Kingdom, September 15-18, 2024. IEEE 2024, ISBN 979-8-3315-1643-7
- Bin Han, Cleo Yau, Su Lei, Jonathan Gratch: Knowledge-Based Emotion Recognition Using Large Language Models. 1-9
- Yasaman Etesam, Özge Nilay Yalçin, Chuxuan Zhang, Angelica Lim: Emotional Theory of Mind: Bridging Fast Visual Processing with Slow Linguistic Reasoning. 10-18
- Sarra Graja, P. George Lovell, Ken Scott-Brown: AffectRankTrace: A Tool for Continuous and Discrete Affective Annotation During Extended Usability Trials. 19-26
- Ryoya Ito, Celso M. de Melo, Jonathan Gratch, Kazunori Terada: Emotional Expression Help Regulate the Appropriate Level of Cooperation with Agents. 27-36
- Tara Nourivandi, Saandeep Aathreya, Shaun J. Canavan: Multimodal Behavior Analysis and Impact of Culture on Affect. 37-45
- A'di Dust, Pat Levitt, Maja J. Mataric: Behind the Smile: Mental Health Implications of Mother-Infant Interactions Revealed Through Smile Analysis. 46-54
- Varun Reddy, Zhiyuan Wang, Emma R. Toner, Maria A. Larrazabal, Mehdi Boukhechba, Bethany A. Teachman, Laura E. Barnes: AudioInsight: Detecting Social Contexts Relevant to Social Anxiety from Speech. 55-62
- Yoon Kyung Lee, Jina Suh, Hongli Zhan, Junyi Jessy Li, Desmond C. Ong: Large Language Models Produce Responses Perceived to be Empathic. 63-71
- Thus Karnjanapatchara, Sixia Li, Candy Olivia Mawalim, Kazunori Komatani, Shogo Okada: Incremental Multimodal Sentiment Analysis for HAIs Based on Multitask Active Learning with Interannotator Agreement. 72-79
- Thorben Ortmann, Qi Wang, Larissa Putzar: EmojiHeroVR: A Study on Facial Expression Recognition Under Partial Occlusion from Head-Mounted Displays. 80-88
- Motoaki Sato, Takahisa Uchida, Yuichiro Yoshikawa, Celso M. de Melo, Jonathan Gratch, Kazunori Terada: People Negotiate Better with Emotional Human-Like Virtual Agents Than Android Robots. 89-98
- Marina Tiuleneva, Emanuele Castano, Radoslaw Niewiadomski: How Do We Perceive the Intensity of Facial Expressions? The PIFE Dataset for Analysis of Perceived Intensity. 99-107
- Sanjeev Nahulanthran, Mor Vered, Leimin Tian, Dana Kulic: "I Think you Need Help! Here's why": Understanding the Effect of Explanations on Automatic Facial Expression Recognition. 108-115
- Mirella Hladký, Rúbia Reis Guerra, Xi Laura Cang, Karon E. MacLean, Patrick Gebhard, Tanja Schneeberger: Modeling the 'Kiss my Ass'-Smile: Appearance and Functions of Smiles in Negative Social Situations. 116-124
- Prasanth Murali, Natasha Yamane, Javier Hernandez, Stacy Marsella, Matthew S. Goodwin, Timothy W. Bickmore: Feeling-the-Beat: Enhancing Empathy and Engagement During Public Speaking Through Heart Rate Sharing. 125-133
- Ryo Ueda, Hiromi Narimatsu, Yusuke Miyao, Shiro Kumano: VAD Emotion Control in Visual Art Captioning via Disentangled Multimodal Representation. 134-141
- Mihail Miller, Stephan Klingner, Ann-Kristin Meyer, Richard Aude: Learning from, with and Without the Interdependencies of Valence-Arousal-Dominance and Their Connection with Basic Emotions. 142-150
- Lorenzo Parenti, Ziggy O'Reilly, Davide Ghiglino, Federica Floris, Tiziana Priolo, Marwen Belkaid, Agnieszka Wykowska: How Preference Towards Robotic Agents Affects Choice Accuracy in Children with Autism Spectrum Disorder. 151-158
- Imran Khan: Operationalising Social Bonding in Human-Robot Dyads Through Physiological and Biobehavioural Proxies of Oxytocin. 159-167
- Imran Khan, Robert Lowe: Surprise! Using Physiological Stress for Allostatic Regulation Under the Active Inference Framework. 168-175
- Toshiki Onishi, Asahi Ogushi, Ryo Ishii, Atsushi Fukayama, Akihiro Miyata: Prediction of Praising Skills Based on Multimodal Information. 176-184
- Silvan Mertes, Dominik Schiller, Michael Dietz, Elisabeth André, Florian Lingenfelser: The AffectToolbox: Affect Analysis for Everyone. 185-193
- Kosmas Pinitas, Nemanja Rasajski, Matthew Barthet, Maria Kaselimi, Konstantinos Makantasis, Antonios Liapis, Georgios N. Yannakakis: Varying the Context to Advance Affect Modelling: A Study on Game Engagement Prediction. 194-202
- Mostafa M. Amin, Björn W. Schuller: On Prompt Sensitivity of ChatGPT in Affective Computing. 203-209
- Philipp Müller, Alexander Heimerl, Sayed Muddashir Hossain, Lea Siegel, Jan Alexandersson, Patrick Gebhard, Elisabeth André, Tanja Schneeberger: Recognizing Emotion Regulation Strategies from Human Behavior with Large Language Models. 210-218
- Antonia Petrogianni, Lefteris Kapelonis, Nikolaos Antoniou, Sofia Eleftheriou, Petros Mitseas, Dimitris Sgouropoulos, Athanasios Katsamanis, Theodoros Giannakopoulos, Shrikanth Narayanan: RobuSER: A Robustness Benchmark for Speech Emotion Recognition. 219-227
- Akshat Choube, Vedant Das Swain, Varun Mishra: SeSaMe: A Framework to Simulate Self-Reported Ground Truth for Mental Health Sensing Studies. 228-237
- Jingyu Xin, Brooks Gump, Stephen Maisto, Randall Jorgensen, Tej Bhatia, Vir V. Phoha, Asif Salekin: Decoding Hostility from Conversations Through Speech and Text Integration. 238-246
- Erik van Haeringen, Charlotte Gerritsen: Emotion Contagion in Avatar-Mediated Group Interactions. 247-256
- Matthew Barthet, Diogo Branco, Roberto Gallotta, Ahmed Khalifa, Georgios N. Yannakakis: Closing the Affective Loop via Experience-Driven Reinforcement Learning Designers. 257-265
- Gayathri Soman, M. V. Judy, Sanjay Madria: Regret Emotion Based Reinforcement Learning for Path Planning in Autonomous Agents. 266-274
- Deniz Iren, Daniel Stanley Tan: Unilateral Facial Action Unit Detection: Revealing Nuanced Facial Expressions. 275-282
- Nicole Lai-Tan, Marios G. Philiastides, Fani Deligianni: Fusion of Spatial and Riemannian Features to Enhance Detection of Gait Adaptation Mental States During Rhythmic Auditory Stimulation. 283-290
- Anargh Viswanath, Michael Gref, Teena Hassan, Christoph Schmidt: A Multimodal, Multilabel Approach to Recognize Emotions in Oral History Interviews. 291-299
- Dimitrios Sgouropoulos, Petros Mitseas, Sofia Eleftheriou, Theodoros Giannakopoulos, Antonia Petrogianni, Lefteris Kapelonis, Nikolaos Antoniou, Athanasios Katsamanis, Shrikanth Narayanan: Emotion-Aware Speech Popularity Prediction: A Use-Case on TED Talks. 300-308
- Jonas Paletschek, Jan Bleimling, David S. Johnson, Hanna Drimalla: A Paradigm to Investigate Social Signals of Understanding and Their Susceptibility to Stress. 309-317
- Georgios Chochlakis, Alexandros Potamianos, Kristina Lerman, Shrikanth Narayanan: The Strong Pull of Prior Knowledge in Large Language Models and Its Impact on Emotion Recognition. 318-326
- Sree Bhattacharyya, Shuhua Yang, James Z. Wang: A Heterogeneous Multimodal Graph Learning Framework for Recognizing User Emotions in Social Networks. 327-336
- Ala N. Tak, Jonathan Gratch: GPT-4 Emulates Average-Human Emotional Cognition from a Third-Person Perspective. 337-345
- Shiran Dudy, Ibrahim Said Ahmad, Ryoko Kitajima, Àgata Lapedriza: Analyzing Cultural Representations of Emotions in LLMs Through Mixed Emotion Survey. 346-354
- Eduardo Gutiérrez-Maestro, Hadi Banaee, Amy Loutfi: Towards Addressing Label Ambiguity in Sequential Emotional Responses Through Distribution Learning. 355-361
- Projna Paromita, Theodora Chaspari: A Linguistic Analysis of the Impact of Team Interactions on Team Performance During Space Exploration Missions. 362-369
- Swarnali Banik, Sougata Sen, Snehanshu Saha, Surjya Ghosh: Improving Continuous Emotion Annotation in Video Platforms via Physiological Response Profiling. 370-377
- Judith Amores, Kael Rowan, Javier Hernandez, Mary Czerwinski: AImagery: A Multisensory Approach to Anxiety Reduction with AI, Olfactory Stimuli, and Biofeedback-Enhanced Guided Imagery. 378-393