ICMI 2023: Paris, France - Companion Publication
- Elisabeth André, Mohamed Chetouani, Dominique Vaufreydaz, Gale M. Lucas, Tanja Schultz, Louis-Philippe Morency, Alessandro Vinciarelli:
International Conference on Multimodal Interaction, ICMI 2023, Companion Volume, Paris, France, October 9-13, 2023. ACM 2023
Late Breaking Results
- Yuqing Zhou, Yijia An, Qisong Niu, Qinglei Bu, Yung C. Liang, Mark Leach, Jie Sun:
A Portable Ball with Unity-based Computer Game for Interactive Arm Motor Control Exercise. 1-5
- Pieter Wolfert, Gustav Eje Henter, Tony Belpaeme:
"Am I listening?", Evaluating the Quality of Generated Data-driven Listening Motion. 6-10
- Tamim Ahmed, Thanassis Rikakis, Aisling Kelliher, Mohammad Soleymani:
ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors. 11-15
- Ayaka Onodera, Riku Ishioka, Yuuki Nishiyama, Kaoru Sezaki:
Assessing Infant and Toddler Behaviors through Wearable Inertial Sensors: A Preliminary Investigation. 16-20
- Aurélien Léchappé, Aurélien Milliat, Cédric Fleury, Mathieu Chollet, Cédric Dumas:
Characterization of collaboration in a virtual environment with gaze and speech signals. 21-25
- Konstantin Kuznetsov, Michael Barz, Daniel Sonntag:
Detection of contract cheating in pen-and-paper exams through the analysis of handwriting style. 26-30
- Fábio Barros, António J. S. Teixeira, Samuel S. Silva:
Developing a Generic Focus Modality for Multimodal Interactive Environments. 31-35
- Merel M. Jung, Mark Van Vlierden, Werner Liebregts, Itir Önal Ertugrul:
Do Body Expressions Leave Good Impressions? - Predicting Investment Decisions based on Pitcher's Body Expressions. 36-40
- Muhammad Riyyan Khan, Shahzeb Naeem, Usman Tariq, Abhinav Dhall, Malik Nasir Afzal Khan, Fares Al-Shargie, Hasan Al-Nashash:
Exploring Neurophysiological Responses to Cross-Cultural Deepfake Videos. 41-45
- Crystal Yang, Karen Arredondo, Jung In Koh, Paul Taele, Tracy Hammond:
HEARD-LE: An Intelligent Conversational Interface for Wordle. 46-50
- Alisa Barkar, Mathieu Chollet, Béatrice Biancardi, Chloé Clavel:
Insights Into the Importance of Linguistic Textual Features on the Persuasiveness of Public Speaking. 51-55
- Björn Severitt, Nora Jane Castner, Olga Lukashova-Sanz, Siegfried Wahl:
Leveraging gaze for potential error prediction in AI-support systems: An exploratory analysis of interaction with a simulated robot. 56-60
- Stéphane Viollet, Martin Chauvet, Jean-Marc Ingargiola:
LinLED: Low latency and accurate contactless gesture interaction. 61-65
- Meehae Song, Steve DiPaola:
Multimodal Entrainment in Bio-Responsive Multi-User VR Interactives. 66-70
- Setareh Nasihati Gilani, Kimberly A. Pollard, David R. Traum:
Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions. 71-75
- Sutirtha Chakraborty, Joseph Timoney:
Multimodal Synchronization in Musical Ensembles: Investigating Audio and Visual Cues. 76-80
- Armand Deffrennes, Lucile Vincent, Marie Pivette, Kevin El Haddad, Jacqueline Deanna Bailey, Monica Perusquía-Hernández, Soraia M. Alarcão, Thierry Dutoit:
The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications. 81-85
- Koji Inoue, Divesh Lala, Keiko Ochi, Tatsuya Kawahara, Gabriel Skantze:
Towards Objective Evaluation of Socially-Situated Conversational Robots: Assessing Human-Likeness through Multimodal User Behaviors. 86-90
- Everlyne Kimani, Alexandre L. S. Filipowicz, Hiroshi Yasuda:
Understanding the Physiological Arousal of Novice Performance Drivers for the Design of Intelligent Driving Systems. 91-95
- Muxiao Sun, Qinglei Bu, Ying Hou, Xiaowen Ju, Limin Yu, Eng Gee Lim, Jie Sun:
Virtual Reality Music Instrument Playing Game for Upper Limb Rehabilitation Training. 96-100
Tutorials
- Paul Pu Liang, Louis-Philippe Morency:
Tutorial on Multimodal Machine Learning: Principles, Challenges, and Open Questions. 101-104
- Sean Andrist, Dan Bohus, Zongjian Li, Mohammad Soleymani:
Platform for Situated Intelligence and OpenSense: A Tutorial on Building Multimodal Interactive Applications for Research. 105-106
Demonstrations and Exhibits
- Stefano Papetti, Eric Larrieux, Martin Fröhlich:
A Versatile Finger-Interaction Device with Audio-Tactile Feedback. 107-108
- Takeshi Saga, Jieyeon Woo, Alexis Gerard, Hiroki Tanaka, Catherine Achard, Satoshi Nakamura, Catherine Pelachaud:
An Adaptive Virtual Agent Platform for Automated Social Skills Training. 109-111
- Nguyen Tan Viet Tuyen, Viktor Schmuck, Oya Çeliktutan:
Gesticulating with NAO: Real-time Context-Aware Co-Speech Gesture Generation for Human-Robot Interaction. 112-114
- Catherine Neubauer:
HAT3: The Human Autonomy Team Trust Toolkit. 115-118
- Masatoshi Hamanaka:
Melody Slot Machine II: Sound Enhancement with Multimodal Interface. 119-120
The Third International Workshop on Automated Assessment of Pain (AAP)
- Tobias B. Ricken, Peter Bellmann, Sascha Gruss, Hans A. Kestler, Steffen Walter, Friedhelm Schwenker:
Pain Recognition Differences between Female and Male Subjects: An Analysis based on the Physiological Signals of the X-ITE Pain Database. 121-130
- Prasanth Murali, Mehdi Arjmand, Matias Volonte, Zixi Li, James W. Griffith, Michael K. Paasche-Orlow, Timothy W. Bickmore:
Towards Automated Pain Assessment using Embodied Conversational Agents. 131-140
ACE Workshop: How Artificial Character Embodiment shapes user behaviour in multi-modal interactions
- Natalia Kalashnikova, Mathilde Hutin, Ioana Vasilescu, Laurence Devillers:
Do We Speak to Robots Looking Like Humans As We Speak to Humans? A Study of Pitch in French Human-Machine and Human-Human Interactions. 141-145
- Andrey Goncharov, Özge Nilay Yalçin, Steve DiPaola:
Expectations vs. Reality: The Impact of Adaptation Gap on Avatars in Social VR Platforms. 146-153
- Julia Ayache, Marta Bienkiewicz, Kathleen Richardson, Benoît G. Bardy:
eXtended Reality of socio-motor interactions: Current Trends and Ethical Considerations for Mixed Reality Environments Design. 154-158
- Spatika Sampath Gujran, Merel M. Jung:
Multimodal prompts effectively elicit robot-initiated social touch interactions. 159-163
The GENEA Workshop 2023: The 3rd Workshop on Generation and Evaluation of Non-verbal Behaviour for Embodied Agents
- Nada Alalyani, Nikhil Krishnaswamy:
A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. 164-173
- Geunmo Kim, Jaewoong Yoo, Hyedong Jung:
Co-Speech Gesture Generation via Audio and Text Feature Engineering. 174-178
- Weiyu Zhao, Liangxiao Hu, Shengping Zhang:
DiffuGesture: Generating Human Gesture From Two-person Dialogue With Diffusion Models. 179-185
- Ankur Chemburkar, Shuhong Lu, Andrew Feng:
Discrete Diffusion for Co-Speech Gesture Synthesis. 186-192
- Rodolfo L. Tonoli, Leonardo B. de M. M. Marques, Lucas H. Ueda, Paula Dornhofer Paro Costa:
Gesture Generation with Diffusion Models Aided by Speech Activity Information. 193-199
- Anna Lea Reinwarth, Tanja Schneeberger, Fabrizio Nunnari, Patrick Gebhard, Uwe Altmann, Janet Wessler:
Look What I Made It Do - The ModelIT Method for Manually Modeling Nonverbal Behavior of Socially Interactive Agents. 200-204
- Mounika Kanakanti, Shantanu Singh, Manish Shrivastava:
MultiFacet: A Multi-Tasking Framework for Speech-to-Sign Language Generation. 205-213
- Viktor Schmuck, Nguyen Tan Viet Tuyen, Oya Çeliktutan:
The KCL-SAIR team's entry to the GENEA Challenge 2023 Exploring Role-based Gesture Generation in Dyadic Interactions: Listener vs. Speaker. 214-219
- Gwantae Kim, Yuanming Li, Hanseok Ko:
The KU-ISPL entry to the GENEA Challenge 2023-A Diffusion Model for Co-speech Gesture generation. 220-227
- Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache:
Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. 228-237
4th International Workshop on Multimodal Affect and Aesthetic Experience - MAAE 2023
- Yann Frachi, Guillaume Chanel, Mathieu Barthet:
Affective gaming using adaptive speed controlled by biofeedback. 238-246
- Steve DiPaola, Suk Kyoung Choi:
Art creation as an emergent multimodal journey in Artificial Intelligence latent space. 247-253
- Vasileios Tsampallas, Laura Renshaw-Vuillier, Fred Charles, Theodoros Kostoulas:
Emotions and Gambling: Towards a Computational Model of Gambling Experience. 254-258
Workshop on Multimodal Conversational Agents for People with Neurodevelopmental Disorders (MCAPND)
- Rajagopal A., Nirmala V., Immanuel Johnraja Jebadurai, Arun Muthuraj Vedamanickam, Prajakta Uthaya Kumar:
Design of Generative Multimodal AI Agents to Enable Persons with Learning Disability. 259-271
- Eleonora Aida Beccaluva, Marta Curreri, Giulia Da Lisca, Pietro Crovari:
Using Implicit Measures to Assess User Experience in Children: A Case Study on the Application of the Implicit Association Test (IAT). 272-281
Workshop on Multimodal, interactive interfaces for education (MIIE)
- Martina Galletti, Eleonora Pasqua, Francesca Bianchi, Manuela Calanca, Francesca Padovani, Daniele Nardi, Donatella Tomaiuoli:
A Reading Comprehension Interface for Students with Learning Disorders. 282-287
- Alina Glushkova, Dimitrios Makrygiannis, Sotirios Manitsaris:
Embodied edutainment experience in a museum: discovering glass-blowing gestures. 288-291
- Taichi Higasa, Keitaro Tanaka, Qi Feng, Shigeo Morishima:
Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability. 292-296
- Daniel C. Tozadore, Soizic Gauthier, Barbara Bruno, Chenyang Wang, Jianling Zou, Lise Aubin, Dominique Archambault, Mohamed Chetouani, Pierre Dillenbourg, David Cohen, Salvatore Maria Anzalone:
The iReCheck project: using tablets and robots for personalised handwriting practice. 297-301
- Stefano Papetti, Eric Larrieux, Martin Fröhlich:
The TouchBox MK3: An Open-Source Device for Finger-Based Interaction with Advanced Auditory and Vibrotactile Feedback. 302-305
- Marjorie Armando, Isabelle Régner, Magalie Ochs:
Toward a Tool Against Stereotype Threat in Math: Children's Perceptions of Virtual Role Models. 306-310
The 5th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data in the Wild (MSECP-Wild)
- Alex-Razvan Ispas, Théo Deschamps-Berger, Laurence Devillers:
A multi-task, multi-modal approach for predicting categorical and dimensional emotions. 311-317
- Steve DiPaola, Meehae Song:
Combining Artificial Intelligence, Bio-Sensing and Multimodal Control for Bio-Responsive Interactives. 318-322
- Garima Sharma, Shreya Ghosh, Abhinav Dhall, Munawar Hayat, Jianfei Cai, Tom Gedeon:
GraphITTI: Attributed Graph-based Dominance Ranking in Social Interaction Videos. 323-329
- Joshua Y. Kim, Kalina Yacef:
Guidelines for designing and building an automated multimodal textual annotation system. 330-336
- Théo Deschamps-Berger, Lori Lamel, Laurence Devillers:
Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations. 337-343
- Auriane Boudin, Roxane Bertrand, Stéphane Rauzy, Matthis Houlès, Thierry Legou, Magalie Ochs, Philippe Blache:
SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal. 344-352
4th Workshop on Social Affective Multimodal Interaction for Health
- Jeffrey A. Brooks, Vineet Tiruvadi, Alice Baird, Panagiotis Tzirakis, Haoqi Li, Chris Gagne, Moses Oh, Alan Cowen:
Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions. 353-358
- Joan Fruitet, Mélodie Fouillen, Valentine Facque, Hanna Chainay, Stéphanie De Chalvron, Franck Tarpin-Bernard:
Engaging with an embodied conversational agent in a computerized cognitive training: an acceptability study with the elderly. 359-362
- Marion Ristorcelli, Emma Gallego, Kévin Nguy, Jean-Marie Pergandi, Rémy Casanova, Magalie Ochs:
Investigating the Impact of a Virtual Audience's Gender and Attitudes on a Human Speaker. 363-367
- Zixiu Wu, Rim Helaoui, Diego Reforgiato Recupero, Daniele Riboni:
Towards Effective Automatic Evaluation of Generated Reflections for Motivational Interviewing. 368-373
4th ICMI Workshop on Bridging Social Sciences and AI for Understanding Child Behaviour (WOCBU)
- Peitong Li, Hui Lu, Ronald W. Poppe, Albert Ali Salah:
Automated Detection of Joint Attention and Mutual Gaze in Free Play Parent-Child Interactions. 374-382
- Dhia-Elhak Goumri, Thomas Janssoone, Leonor Becerra-Bonache, Abdellah Fourtassi:
Automatic Detection of Gaze and Smile in Children's Video Calls. 383-388
- Bruno Carlos Dos Santos Melício, Linyun Xiang, Emily Dillon, Latha Soorya, Mohamed Chetouani, Andras Sarkany, Peter Kun, Kristian Fenech, András Lörincz:
Composite AI for Behavior Analysis in Social Interactions. 389-397
- Seyma Takir, Elif Toprak, Pinar Uluer, Duygun Erol Barkana, Hatice Kose:
Exploring the Potential of Multimodal Emotion Recognition for Hearing-Impaired Children Using Physiological Signals and Facial Expressions. 398-405
- Olga V. Frolova, Aleksandr Nikolaev, Platon Grave, Elena E. Lyakso:
Speech Features of Children with Mild Intellectual Disabilities. 406-413
- Samy Tafasca, Anshul Gupta, Nada Kojovic, Mirko Gelsomini, Thomas Maillart, Michela Papandrea, Marie Schaer, Jean-Marc Odobez:
The AI4Autism Project: A Multimodal and Interdisciplinary Approach to Autism Diagnosis and Stratification. 414-425
- Bruno Tafur, Staci Weiss, Marwa Mahmoud:
Towards early prediction of neurodevelopmental disorders: Computational model for Face Touch and Self-adaptors in Infants. 426-434