Journal on Multimodal User Interfaces, Volume 14
Volume 14, Number 1, March 2020
- Vincenzo Lussu, Radoslaw Niewiadomski, Gualtiero Volpe, Antonio Camurri: The role of respiration audio in multimodal analysis of movement qualities. 1-15
- Wei Wei, Qingxuan Jia, Yongli Feng, Gang Chen, Ming Chu: Multi-modal facial expression feature based on deep-neural networks. 17-23
- David Rudi, Peter Kiefer, Ioannis Giannopoulos, Martin Raubal: Gaze-based interactions in the cockpit of the future: a survey. 25-48
- Ahmed Housni Alsswey, Hosam Al-Samarraie: Elderly users' acceptance of mHealth user interface (UI) design-based culture: the moderator role of age. 49-59
- Mriganka Biswas, Marta Romeo, Angelo Cangelosi, Ray Jones: Are older people any different from younger people in the way they want to interact with robots? Scenario based survey. 61-72
- Hiroki Tanaka, Hidemi Iwasaka, Hideki Negoro, Satoshi Nakamura: Analysis of conversational listening skills toward agent-based social skills training. 73-82
- Justin Mathew, Stéphane Huot, Brian F. G. Katz: Comparison of spatial and temporal interaction techniques for 3D audio trajectory authoring. 83-100
- Gowdham Prabhakar, Aparna Ramakrishnan, Modiksha Madan, L. R. D. Murthy, Vinay Krishna Sharma, Sachin Deshmukh, Pradipta Biswas: Interactive gaze and finger controlled HUD for cars. 101-121
- Hayoung Jeong, Taeho Kang, Jiwon Choi, Jong Kim: A comparative assessment of Wi-Fi and acoustic signal-based HCI methods on the practicality. 123-137
Volume 14, Number 2, June 2020
- Myounghoon Jeon, Areti Andreopoulou, Brian F. G. Katz: Auditory displays and auditory user interfaces: art, design, science, and research. 139-141
- Stephen Roddy, Brian Bridges: Mapping for meaning: the embodied sonification listening model and its implications for the mapping problem in sonic information design. 143-151
- Joseph W. Newbold, Nicolas E. Gold, Nadia Bianchi-Berthouze: Movement sonification expectancy model: leveraging musical expectancy theory to create movement-altering sonifications. 153-166
- Steven Landry, Myounghoon Jeon: Interactive sonification strategies for the motion and emotion of dance performances. 167-186
- Katharina Groß-Vogt, Matthias Frank, Robert Höldrich: Focused Audification and the optimization of its parameters. 187-198
- Rafael N. C. Patrick, Tomasz R. Letowski, Maranda E. McBride: A multimodal auditory equal-loudness comparison of air and bone conducted sounds. 199-206
- Andrea Lorena Aldana Blanco, Steffen Grautoff, Thomas Hermann: ECG sonification to support the diagnosis and monitoring of myocardial infarction. 207-218
- Jindrich Matousek, Zdenek Krnoul, Michal Campr, Zbynek Zajíc, Zdenek Hanzlícek, Martin Gruber, Marie Kocurová: Speech and web-based technology to enhance education for pupils with visual impairment. 219-230
Volume 14, Number 3, September 2020
- Thomas Pietrzak, Marcelo M. Wanderley: Haptic and audio interaction design. 231-233
- James Leonard, Jérôme Villeneuve, Alexandros Kontogeorgakopoulos: Multisensory instrumental dynamics as an emergent paradigm for digital musical creation. 235-253
- Yuri De Pra, Stefano Papetti, Federico Fontana, Hanna Järveläinen, Michele Simonato: Tactile discrimination of material properties: application to virtual buttons for professional appliances. 255-269
- Sebastian Merchel, Mehmet Ercan Altinsoy: Psychophysical comparison of the auditory and tactile perception: a survey. 271-283
- Aditya Tirumala Bukkapatnam, Philippe Depalle, Marcelo M. Wanderley: Defining a vibrotactile toolkit for digital musical instruments: characterizing voice coil actuators, effects of loading, and equalization of the frequency response. 285-301
- Charlotte Magnusson, Kirsten Rassmus-Gröhn, Bitte Rydeman: Developing a mobile activity game for stroke survivors - lessons learned. 303-312
Volume 14, Number 4, December 2020
- Seungwon Kim, Mark Billinghurst, Kangsoo Kim: Multimodal interfaces and communication cues for remote collaboration. 313-319
- Seungwon Kim, Gun A. Lee, Mark Billinghurst, Weidong Huang: The combination of visual communication cues in mixed reality remote collaboration. 321-335
- Jing Yang, Prasanth Sasikumar, Huidong Bai, Amit Barde, Gábor Sörös, Mark Billinghurst: The effects of spatial auditory and visual cues on mixed reality remote collaboration. 337-352
- Austin Erickson, Nahal Norouzi, Kangsoo Kim, Ryan Schubert, Jonathan Jules, Joseph J. LaViola, Gerd Bruder, Gregory F. Welch: Sharing gaze rays for visual target identification tasks in collaborative augmented reality. 353-371
- Theophilus Teo, Mitchell Norman, Gun A. Lee, Mark Billinghurst, Matt Adcock: Exploring interaction techniques for 360 panoramas inside a 3D reconstructed scene for mixed reality remote collaboration. 373-385
- Jianlong Zhou, Simon Luo, Fang Chen: Effects of personality traits on user trust in human-machine collaborations. 387-400