Search dblp
Full-text search
- case-insensitive prefix search: default
  e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append dollar sign ($) to word
  e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by space
  e.g., codd model
- boolean or: connect words by pipe symbol (|)
  e.g., graph|network
Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus sign will be interpreted as regular (positive) search terms.
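The operators above can also be used when querying dblp programmatically. The sketch below assumes the public publication search endpoint at https://dblp.org/search/publ/api with its q, format, and h parameters and the JSON layout it returns at the time of writing; verify against the current API description before relying on it.

```python
# Sketch: full-text publication search against dblp using the operators above.
# Assumes the public endpoint https://dblp.org/search/publ/api and the
# q / format / h parameters; verify against the current API documentation.
import json
import urllib.parse
import urllib.request

def dblp_search(query: str, hits: int = 30) -> dict:
    """Run a publication search and return the parsed JSON response."""
    params = urllib.parse.urlencode({"q": query, "format": "json", "h": hits})
    with urllib.request.urlopen(f"https://dblp.org/search/publ/api?{params}") as resp:
        return json.load(resp)

# "multimodal$" = exact word, space = boolean and, "|" = boolean or.
result = dblp_search("multimodal$ gaze|gesture")
for hit in result["result"]["hits"].get("hit", []):
    info = hit["info"]
    print(f'{info.get("title")} ({info.get("venue")} {info.get("year")})')
```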
Author search results
no matches
Venue search results
no matches
Publication search results
found 102 matches
2022
- Yufeng Yin, Jiashu Xu, Tianxin Zu, Mohammad Soleymani: X-Norm: Exchanging Normalization Parameters for Bimodal Fusion. ICMI 2022: 605-614
- Ritvik Agrawal, Shreyank Jyoti, Rohit Girmaji, Sarath Sivaprasad, Vineet Gandhi: Does Audio help in deep Audio-Visual Saliency prediction models? ICMI 2022: 48-56
- Khalil J. Anderson: Real-time Feedback for Developing Conversation Literacy. ICMI 2022: 701-704
- Riku Arakawa, Mayank Goel, Chris Harrison, Karan Ahuja: RGBDGaze: Gaze Tracking on Smartphones with RGB and Depth Data. ICMI 2022: 329-336
- Ayca Aygun, Boyang Lyu, Thuan Nguyen, Zachary Haga, Shuchin Aeron, Matthias Scheutz: Cognitive Workload Assessment via Eye Gaze and EEG in an Interactive Multi-Modal Driving Task. ICMI 2022: 337-348
- Chongyang Bai, Maksim Bolonkin, Viney Regunath, V. S. Subrahmanian: POLLY: A Multimodal Cross-Cultural Context-Sensitive Framework to Predict Political Lying from Videos. ICMI 2022: 520-530
- Mimi Bocanegra, Mailin Lemke, Roelof Anne Jelle de Vries, Geke D. S. Ludden: Commensality or Reverie in Eating? Exploring the Solo Dining Experience. ICMI 2022: 25-35
- Dan Bohus, Sean Andrist, Ashley Feniello, Nick Saw, Eric Horvitz: Continual Learning about Objects in the Wild: An Interactive Approach. ICMI 2022: 476-486
- Auriane Boudin: Interdisciplinary Corpus-based Approach for Exploring Multimodal Conversational Feedback. ICMI 2022: 705-710
- Justine Cassell: The Future of the Body in Tomorrow's Workplace. ICMI 2022: 4
- Che-Jui Chang, Sen Zhang, Mubbasir Kapadia: The IVI Lab entry to the GENEA Challenge 2022 - A Tacotron2 Based Method for Co-Speech Gesture Generation With Locality-Constraint Attention Mechanism. ICMI 2022: 784-789
- Nicola Corbellini: Towards Human-Machine Collaboration: Multimodal Group Potency Estimation. ICMI 2022: 685-689
- Keith Curtis, George Awad, Shahzad Rajput, Ian Soboroff: Second International Workshop on Deep Video Understanding. ICMI 2022: 801-802
- Tiffany D. Do, Mamtaj Akter, Zubin Datta Choudhary, Roger Azevedo, Ryan P. McMahan: The Effects of an Embodied Pedagogical Agent's Synthetic Speech Accent on Learning Outcomes. ICMI 2022: 198-206
- Bernd Dudzik, Hayley Hung: Exploring the Detection of Spontaneous Recollections during Video-viewing In-the-Wild using Facial Behavior Analysis. ICMI 2022: 236-246
- Bernd Dudzik, Dennis Küster, David St-Onge, Felix Putze: The 4th Workshop on Modeling Socio-Emotional and Cognitive Processes from Multimodal Data In-the-Wild (MSECP-Wild). ICMI 2022: 803-804
- Maha Elgarf, Sahba Zojaji, Gabriel Skantze, Christopher Peters: CreativeBot: a Creative Storyteller robot to stimulate creativity in children. ICMI 2022: 540-548
- Gauthier Robert Jean Faisandaz, Alix Goguey, Christophe Jouffrais, Laurence Nigay: Keep in Touch: Combining Touch Interaction with Thumb-to-Finger µGestures for People with Visual Impairment. ICMI 2022: 105-116
- Yajing Feng: Multimodal Representations and Assessments of Emotional Fluctuations of Speakers in Call Centers Conversations. ICMI 2022: 724-729
- Marc Fraile, Christine Fawcett, Joakim Lindblad, Natasa Sladoje, Ginevra Castellano: End-to-End Learning and Analysis of Infant Engagement During Guided Play: Prediction and Explainability. ICMI 2022: 444-454
- Daniel Gatica-Perez: Focus on People: Five Questions from Human-Centered Computing. ICMI 2022: 3
- Saeed Ghorbani, Ylva Ferstl, Marc-André Carbonneau: Exemplar-based Stylized Gesture Generation from Speech: An Entry to the GENEA Challenge 2022. ICMI 2022: 778-783
- Amr Gomaa: Adaptive User-Centered Multimodal Interaction towards Reliable and Trusted Automotive Interfaces. ICMI 2022: 690-695
- Masatoshi Hamanaka: Sound Scope Pad: Controlling a VR Concert with Natural Movement. ICMI 2022: 730-732
- Satchit Hari, Ajay, Sayan Sarcar, Sougata Sen, Surjya Ghosh: AffectPro: Towards Constructing Affective Profile Combining Smartphone Typing Interaction and Emotion Self-reporting Pattern. ICMI 2022: 217-223
- Ramin Hedeshy, Chandan Kumar, Mike Lauer, Steffen Staab: All Birds Must Fly: The Experience of Multimodal Hands-free Gaming with Gaze and Nonverbal Voice Synchronization. ICMI 2022: 278-287
- Daria Joanna Hemmerling, Maciej Stroinski, Kamil Kwarciak, Krzysztof Trusiak, Maciej Szymkowski, Weronika Celniak, William Frier, Orestis Georgiou, Mykola Maksymenko: Touchless touch with biosignal transfer for online communication. ICMI 2022: 579-590
- Eric Horvitz: On the Horizon: Interactive and Compositional Deepfakes. ICMI 2022: 653-661
- Tiffany Matej Hrkalovic: Designing Hybrid Intelligence Techniques for Facilitating Collaboration Informed by Social Science. ICMI 2022: 679-684
- Stephen Hutt, Sidney K. D'Mello: Evaluating Calibration-free Webcam-based Eye Tracking for Gaze-based User Modeling. ICMI 2022: 224-235
skipping 72 more matches
manage site settings
To protect your privacy, all features that rely on external API calls from your browser are turned off by default. You need to opt-in for them to become active. All settings here will be stored as cookies with your web browser. For more information see our F.A.Q.
Unpaywalled article links
Add open access links from unpaywall.org to the list of external document links (if available).
Privacy notice: By enabling the option above, your browser will contact the API of unpaywall.org to load hyperlinks to open access articles. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Unpaywall privacy policy.
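For illustration, the lookup this option performs can be reproduced directly: the sketch below asks the Unpaywall REST API for an open access link to a given DOI. The v2 endpoint, its required email parameter, and the best_oa_location field are assumptions based on Unpaywall's public documentation; the DOI in the usage comment is a placeholder.

```python
# Sketch: query unpaywall.org for an open access link to a given DOI.
# Assumes the documented v2 endpoint and its "best_oa_location" field.
import json
import urllib.request

def open_access_url(doi: str, email: str) -> str | None:
    url = f"https://api.unpaywall.org/v2/{doi}?email={email}"
    with urllib.request.urlopen(url) as resp:
        record = json.load(resp)
    best = record.get("best_oa_location") or {}
    return best.get("url_for_pdf") or best.get("url")

# open_access_url("10.1145/<doi-of-a-record>", "you@example.org")  # placeholder DOI
```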
Archived links via Wayback Machine
For web pages which are no longer available, try to retrieve content from the Wayback Machine of the Internet Archive (if available).
Privacy notice: By enabling the option above, your browser will contact the API of archive.org to check for archived content of web pages that are no longer available. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Internet Archive privacy policy.
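As a sketch of what such a check looks like, the Wayback Machine exposes an availability endpoint that returns the closest archived snapshot for a URL; the endpoint and response fields below follow archive.org's public documentation and should be treated as assumptions.

```python
# Sketch: ask the Wayback Machine whether an archived copy of a page exists.
# Assumes the public availability endpoint and its "archived_snapshots" field.
import json
import urllib.parse
import urllib.request

def wayback_snapshot(page_url: str) -> str | None:
    query = urllib.parse.urlencode({"url": page_url})
    with urllib.request.urlopen(f"https://archive.org/wayback/available?{query}") as resp:
        data = json.load(resp)
    closest = data.get("archived_snapshots", {}).get("closest", {})
    return closest.get("url") if closest.get("available") else None

print(wayback_snapshot("https://dblp.org/"))
```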
Reference lists
Add a list of references from crossref.org, opencitations.net, and semanticscholar.org to record detail pages.
load references from crossref.org and opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of crossref.org, opencitations.net, and semanticscholar.org to load article reference information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the Crossref privacy policy and the OpenCitations privacy policy, as well as the AI2 Privacy Policy covering Semantic Scholar.
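A minimal sketch of one such lookup: fetching the deposited reference list for a DOI from the Crossref REST API. The /works/{doi} endpoint and the message.reference field follow Crossref's public documentation; not every record has references deposited, and the DOI in the usage comment is a placeholder.

```python
# Sketch: load the reference list of a work from the Crossref REST API.
# Assumes the public /works/{doi} endpoint; "reference" is only present
# when the publisher has deposited reference data.
import json
import urllib.request

def crossref_references(doi: str) -> list[dict]:
    with urllib.request.urlopen(f"https://api.crossref.org/works/{doi}") as resp:
        message = json.load(resp)["message"]
    return message.get("reference", [])

# for ref in crossref_references("10.1145/<doi-of-a-record>"):  # placeholder DOI
#     print(ref.get("unstructured") or ref.get("DOI"))
```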
Citation data
Add a list of citing articles from opencitations.net and semanticscholar.org to record detail pages.
load citations from opencitations.net
Privacy notice: By enabling the option above, your browser will contact the APIs of opencitations.net and semanticscholar.org to load citation information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the OpenCitations privacy policy as well as the AI2 Privacy Policy covering Semantic Scholar.
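A sketch of the corresponding citation lookup against OpenCitations: the COCI index offers a REST endpoint that lists works citing a given DOI. The endpoint path and the "citing" field are assumptions based on the public COCI API description; the DOI in the usage comment is a placeholder.

```python
# Sketch: list DOIs of articles citing a given DOI via the OpenCitations
# COCI index. Assumes the public /index/coci/api/v1/citations/{doi} endpoint.
import json
import urllib.request

def citing_dois(doi: str) -> list[str]:
    url = f"https://opencitations.net/index/coci/api/v1/citations/{doi}"
    with urllib.request.urlopen(url) as resp:
        return [row["citing"] for row in json.load(resp)]

# print(citing_dois("10.1145/<doi-of-a-record>"))  # placeholder DOI
```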
OpenAlex data
Load additional information about publications from openalex.org.
Privacy notice: By enabling the option above, your browser will contact the API of openalex.org to load additional information. Although we do not have any reason to believe that your call will be tracked, we do not have any control over how the remote server uses your data. So please proceed with care and consider checking the information given by OpenAlex.
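For illustration, a sketch of fetching an OpenAlex work record by DOI; the https://api.openalex.org/works/doi:{doi} form and the field names below are assumptions based on OpenAlex's public documentation, and the DOI in the usage comment is a placeholder.

```python
# Sketch: load an OpenAlex work record by DOI and print a few fields.
# Assumes the public https://api.openalex.org/works/doi:{doi} endpoint.
import json
import urllib.request

def openalex_work(doi: str) -> dict:
    with urllib.request.urlopen(f"https://api.openalex.org/works/doi:{doi}") as resp:
        return json.load(resp)

# work = openalex_work("10.1145/<doi-of-a-record>")  # placeholder DOI
# print(work.get("display_name"), work.get("cited_by_count"))
```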
retrieved on 2024-12-26 09:01 CET from data curated by the dblp team
all metadata released as open data under CC0 1.0 license