Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words by the pipe symbol (|); e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
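These operators can also be used programmatically. Below is a minimal Python sketch against dblp's public publication-search endpoint (https://dblp.org/search/publ/api, documented in the dblp FAQ); the q, format, and h parameters follow that documentation, and the example queries are purely illustrative.

```python
# Minimal sketch: querying dblp full-text search with the operators above.
# Endpoint and parameters follow the dblp FAQ; verify against the current docs.
import json
import urllib.parse
import urllib.request

DBLP_PUBL_API = "https://dblp.org/search/publ/api"

def search_dblp(query: str, hits: int = 10) -> list:
    """Return publication hits for a dblp full-text search query."""
    params = urllib.parse.urlencode({"q": query, "format": "json", "h": hits})
    with urllib.request.urlopen(f"{DBLP_PUBL_API}?{params}") as resp:
        payload = json.load(resp)
    # the "hit" key is absent when there are no matches
    return payload["result"]["hits"].get("hit", [])

# Prefix search (default): "sig" matches "SIGIR" as well as "signal".
for hit in search_dblp("sig"):
    print(hit["info"]["title"])

# Exact word + boolean and: the exact word "graph" and the prefix "codd".
for hit in search_dblp("graph$ codd"):
    print(hit["info"]["title"])

# Boolean or: matches on the prefix "graph" or the prefix "network".
for hit in search_dblp("graph|network"):
    print(hit["info"]["title"])
```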
Author search results
no matches
Venue search results
no matches
Publication search results
found 71 matches
2023

- Konstantin Kuznetsov, Michael Barz, Daniel Sonntag: Detection of contract cheating in pen-and-paper exams through the analysis of handwriting style. ICMI Companion 2023: 26-30
- Rajagopal A., Nirmala V., Immanuel Johnraja Jebadurai, Arun Muthuraj Vedamanickam, Prajakta Uthaya Kumar: Design of Generative Multimodal AI Agents to Enable Persons with Learning Disability. ICMI Companion 2023: 259-271
- Tamim Ahmed, Thanassis Rikakis, Aisling Kelliher, Mohammad Soleymani: ASAR Dataset and Computational Model for Affective State Recognition During ARAT Assessment for Upper Extremity Stroke Survivors. ICMI Companion 2023: 11-15
- Nada Alalyani, Nikhil Krishnaswamy: A Methodology for Evaluating Multimodal Referring Expression Generation for Embodied Virtual Agents. ICMI Companion 2023: 164-173
- Sean Andrist, Dan Bohus, Zongjian Li, Mohammad Soleymani: Platform for Situated Intelligence and OpenSense: A Tutorial on Building Multimodal Interactive Applications for Research. ICMI Companion 2023: 105-106
- Marjorie Armando, Isabelle Régner, Magalie Ochs: Toward a Tool Against Stereotype Threat in Math: Children's Perceptions of Virtual Role Models. ICMI Companion 2023: 306-310
- Julia Ayache, Marta Bienkiewicz, Kathleen Richardson, Benoît G. Bardy: eXtended Reality of socio-motor interactions: Current Trends and Ethical Considerations for Mixed Reality Environments Design. ICMI Companion 2023: 154-158
- Alisa Barkar, Mathieu Chollet, Béatrice Biancardi, Chloé Clavel: Insights Into the Importance of Linguistic Textual Features on the Persuasiveness of Public Speaking. ICMI Companion 2023: 51-55
- Fábio Barros, António J. S. Teixeira, Samuel S. Silva: Developing a Generic Focus Modality for Multimodal Interactive Environments. ICMI Companion 2023: 31-35
- Eleonora Aida Beccaluva, Marta Curreri, Giulia Da Lisca, Pietro Crovari: Using Implicit Measures to Assess User Experience in Children: A Case Study on the Application of the Implicit Association Test (IAT). ICMI Companion 2023: 272-281
- Auriane Boudin, Roxane Bertrand, Stéphane Rauzy, Matthis Houlès, Thierry Legou, Magalie Ochs, Philippe Blache: SMYLE: A new multimodal resource of talk-in-interaction including neuro-physiological signal. ICMI Companion 2023: 344-352
- Jeffrey A. Brooks, Vineet Tiruvadi, Alice Baird, Panagiotis Tzirakis, Haoqi Li, Chris Gagne, Moses Oh, Alan Cowen: Emotion Expression Estimates to Measure and Improve Multimodal Social-Affective Interactions. ICMI Companion 2023: 353-358
- Sutirtha Chakraborty, Joseph Timoney: Multimodal Synchronization in Musical Ensembles: Investigating Audio and Visual Cues. ICMI Companion 2023: 76-80
- Ankur Chemburkar, Shuhong Lu, Andrew Feng: Discrete Diffusion for Co-Speech Gesture Synthesis. ICMI Companion 2023: 186-192
- Armand Deffrennes, Lucile Vincent, Marie Pivette, Kevin El Haddad, Jacqueline Deanna Bailey, Monica Perusquía-Hernández, Soraia M. Alarcão, Thierry Dutoit: The Limitations of Current Similarity-Based Objective Metrics in the Context of Human-Agent Interaction Applications. ICMI Companion 2023: 81-85
- Alice Delbosc, Magalie Ochs, Nicolas Sabouret, Brian Ravenet, Stéphane Ayache: Towards the generation of synchronized and believable non-verbal facial behaviors of a talking virtual agent. ICMI Companion 2023: 228-237
- Théo Deschamps-Berger, Lori Lamel, Laurence Devillers: Multiscale Contextual Learning for Speech Emotion Recognition in Emergency Call Center Conversations. ICMI Companion 2023: 337-343
- Steve DiPaola, Suk Kyoung Choi: Art creation as an emergent multimodal journey in Artificial Intelligence latent space. ICMI Companion 2023: 247-253
- Steve DiPaola, Meehae Song: Combining Artificial Intelligence, Bio-Sensing and Multimodal Control for Bio-Responsive Interactives. ICMI Companion 2023: 318-322
- Yann Frachi, Guillaume Chanel, Mathieu Barthet: Affective gaming using adaptive speed controlled by biofeedback. ICMI Companion 2023: 238-246
- Olga V. Frolova, Aleksandr Nikolaev, Platon Grave, Elena E. Lyakso: Speech Features of Children with Mild Intellectual Disabilities. ICMI Companion 2023: 406-413
- Joan Fruitet, Mélodie Fouillen, Valentine Facque, Hanna Chainay, Stéphanie De Chalvron, Franck Tarpin-Bernard: Engaging with an embodied conversational agent in a computerized cognitive training: an acceptability study with the elderly. ICMI Companion 2023: 359-362
- Martina Galletti, Eleonora Pasqua, Francesca Bianchi, Manuela Calanca, Francesca Padovani, Daniele Nardi, Donatella Tomaiuoli: A Reading Comprehension Interface for Students with Learning Disorders. ICMI Companion 2023: 282-287
- Setareh Nasihati Gilani, Kimberly A. Pollard, David R. Traum: Multimodal Prediction of User's Performance in High-Stress Dialogue Interactions. ICMI Companion 2023: 71-75
- Alina Glushkova, Dimitrios Makrygiannis, Sotirios Manitsaris: Embodied edutainment experience in a museum: discovering glass-blowing gestures. ICMI Companion 2023: 288-291
- Andrey Goncharov, Özge Nilay Yalçin, Steve DiPaola: Expectations vs. Reality: The Impact of Adaptation Gap on Avatars in Social VR Platforms. ICMI Companion 2023: 146-153
- Dhia-Elhak Goumri, Thomas Janssoone, Leonor Becerra-Bonache, Abdellah Fourtassi: Automatic Detection of Gaze and Smile in Children's Video Calls. ICMI Companion 2023: 383-388
- Spatika Sampath Gujran, Merel M. Jung: Multimodal prompts effectively elicit robot-initiated social touch interactions. ICMI Companion 2023: 159-163
- Masatoshi Hamanaka: Melody Slot Machine II: Sound Enhancement with Multimodal Interface. ICMI Companion 2023: 119-120
- Taichi Higasa, Keitaro Tanaka, Qi Feng, Shigeo Morishima: Gaze-Driven Sentence Simplification for Language Learners: Enhancing Comprehension and Readability. ICMI Companion 2023: 292-296
41 more matches not shown.
Manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt-in for them to become active. All settings here will be stored as cookies with your web browser. For more information see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
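As an illustration of what this opt-in does, here is a minimal Python sketch of the same kind of lookup against Unpaywall's documented REST endpoint (https://api.unpaywall.org/v2/{doi}); the DOI and email below are placeholders, not values used by dblp.

```python
# Sketch of an Unpaywall open-access lookup for one DOI.
import json
import urllib.request

doi = "10.1000/xyz123"        # placeholder DOI (the DOI Foundation's example)
email = "you@example.org"     # Unpaywall asks callers to identify themselves
url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
with urllib.request.urlopen(url) as resp:
    record = json.load(resp)

# best_oa_location is null when no open-access copy is known
location = record.get("best_oa_location")
print(location["url"] if location else "no open access copy found")
```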
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
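For reference, a sketch of this kind of check using the Wayback Machine availability endpoint (https://archive.org/wayback/available); the URL below is a placeholder.

```python
# Sketch: ask the Wayback Machine for the closest archived snapshot of a URL.
import json
import urllib.parse
import urllib.request

dead_url = "http://example.org/vanished-page"   # placeholder URL
query = urllib.parse.urlencode({"url": dead_url})
with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
    data = json.load(resp)

# "archived_snapshots" is empty when nothing was archived
snapshot = data.get("archived_snapshots", {}).get("closest")
print(snapshot["url"] if snapshot else "no archived copy found")
```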
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
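A minimal sketch of one such reference lookup, assuming Crossref's public REST API (https://api.crossref.org/works/{doi}); the DOI is a placeholder, and the "reference" field is only present when the publisher deposited reference data.

```python
# Sketch: fetch the deposited reference list for a work from Crossref.
import json
import urllib.request

doi = "10.1000/xyz123"   # placeholder DOI
with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}") as resp:
    work = json.load(resp)["message"]

for ref in work.get("reference", []):
    # each entry may carry a DOI, an unstructured citation string, or both
    print(ref.get("DOI") or ref.get("unstructured", "(unresolved reference)"))
```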
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
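As an illustration, a sketch of a citing-articles lookup against the OpenCitations COCI index API v1 (https://opencitations.net/index/coci/api/v1/citations/{doi}); the DOI is a placeholder, and the endpoint version should be checked against the current OpenCitations documentation.

```python
# Sketch: list articles citing a given DOI via the OpenCitations COCI index.
import json
import urllib.request

doi = "10.1000/xyz123"   # placeholder DOI
url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"
with urllib.request.urlopen(url) as resp:
    citations = json.load(resp)   # a JSON list, one object per citation link

for c in citations:
    print(c["citing"], "cites", c["cited"])   # both fields are DOIs
```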
OpenAlex data
Load additional information about publications from openalex.org.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
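For reference, a sketch of the corresponding lookup against the OpenAlex works endpoint (https://api.openalex.org/works/doi:{doi}); the DOI is a placeholder.

```python
# Sketch: fetch an OpenAlex work record by DOI.
import json
import urllib.request

doi = "10.1000/xyz123"   # placeholder DOI
with urllib.request.urlopen(f"https://api.openalex.org/works/doi:{doi}") as resp:
    work = json.load(resp)

print(work["display_name"], "| cited by", work["cited_by_count"], "works")
```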
retrieved on 2024-11-02 01:30 CET from data curated by the dblp team
all metadata released as open data under CC0 1.0 license
see also: Terms of Use | Privacy Policy | Imprint