Journal on Multimodal User Interfaces, Volume 11
Volume 11, Number 1, March 2017
- Xiao-Li Guo, Ting-Ting Yang:
  Gesture recognition based on HMM-FNN model using a Kinect. 1-7
- Cristian A. Torres-Valencia, Mauricio A. Álvarez, Álvaro Ángel Orozco-Gutiérrez:
  SVM-based feature selection methods for emotion recognition from multimodal data. 9-23
- Tim Vets, Luc Nijs, Micheline Lesaffre, Bart Moens, Federica Bressan, Pieter Colpaert, Peter Lambert, Rik Van de Walle, Marc Leman:
  Gamified music improvisation with BilliArT: a multimodal installation with balls. 25-38
- Alan Del Piccolo, Davide Rocchesso:
  Non-speech voice for sonic interaction: a catalogue. 39-55
- Marine Taffou, Jan Ondrej, Carol O'Sullivan, Olivier Warusfel, Isabelle Viaud-Delmon:
  Judging crowds' size by ear and by eye in virtual reality. 57-65
- Ayoung Hong, Dong Gun Lee, Heinrich H. Bülthoff, Hyoung Il Son:
  Multimodal feedback for teleoperation of multiple mobile robots in an outdoor environment. 67-80
- Merel M. Jung, Mannes Poel, Ronald Poppe, Dirk K. J. Heylen:
  Automatic recognition of touch gestures in the corpus of social touch. 81-96
- Thi Thuong Huyen Nguyen, Charles Pontonnier, Simon Hilt, Thierry Duval, Georges Dumont:
  VR-based operating modes and metaphors for collaborative ergonomic design of industrial workstations. 97-111
- Gérard Bailly:
  Critical review of the book "Gaze in Human-Robot Communication". 113-114
Volume 11, Number 2, June 2017
- Benjamin Weiss, Ina Wechsung, Stefan Hillmann, Sebastian Möller:
  Multimodal HCI: exploratory studies on effects of first impression and single modality ratings in retrospective evaluation. 115-131
- Youngsun Kim, Jaedong Lee, Gerard Jounghyun Kim:
  Design and application of 2D illusory vibrotactile feedback for hand-held tablets. 133-148
- Alexy Bhowmick, Shyamanta M. Hazarika:
  An insight into assistive technology for the visually impaired and blind people: state-of-the-art and future trends. 149-172
- Paola Salomoni, Catia Prandi, Marco Roccetti, Lorenzo Casanova, Luca Marchetti, Gustavo Marfia:
  Diegetic user interfaces for virtual environments with HMDs: a user experience study with oculus rift. 173-184
- Yuya Chiba, Takashi Nose, Akinori Ito:
  Cluster-based approach to discriminate the user's state whether a user is embarrassed or thinking to an answer to a prompt. 185-196
- Jérémy Lacoche, Thierry Duval, Bruno Arnaldi, Eric Maisel, Jérôme Royan:
  Providing plasticity and redistribution for 3D user interfaces using the D3PART model. 197-210
- Julian Abich IV, Daniel J. Barber:
  The impact of human-robot multimodal communication on mental workload, usability preference, and expectations of robot behavior. 211-225
- Sunil Kumar, Manas Kamal Bhuyan, Biplab Ketan Chakraborty:
  Extraction of texture and geometrical features from informative facial regions for sign language recognition. 227-239
Volume 11, Number 3, September 2017
- Hansol Kim, Kun Ha Suh, Eui Chul Lee:
  Multi-modal user interface combining eye tracking and hand gesture recognition. 241-250
- Roman Hak, Tomás Zeman:
  Consistent categorization of multimodal integration patterns during human-computer interaction. 251-265
- S. Devadethan, Geevarghese Titus:
  An ICA based head movement classification system using video signals. 267-276
- Justin Mathew, Stéphane Huot, Brian F. G. Katz:
  Survey and implications for the design of new 3D audio production and authoring tools. 277-287
- Jaedong Lee, Changhyeon Lee, Gerard Jounghyun Kim:
  Vouch: multimodal touch-and-voice input for smart watches under difficult operating conditions. 289-299
Volume 11, Number 4, December 2017
- Radu-Daniel Vatavu:
  Characterizing gesture knowledge transfer across multiple contexts of use. 301-314
- Youngwon R. Kim, Euijai Ahn, Gerard Jounghyun Kim:
  Evaluation of hand-foot coordinated quadruped interaction for mobile applications. 315-325
- Hernán F. García, Mauricio A. Álvarez, Álvaro Á. Orozco:
  Dynamic facial landmarking selection for emotion recognition using Gaussian processes. 327-340