8th ICMI 2006: Banff, Alberta, Canada
- Francis K. H. Quek, Jie Yang, Dominic W. Massaro, Abeer A. Alwan, Timothy J. Hazen: Proceedings of the 8th International Conference on Multimodal Interfaces, ICMI 2006, Banff, Alberta, Canada, November 2-4, 2006. ACM 2006, ISBN 1-59593-541-X
- Ted Warburton: Weight, weight, don't tell me. 1
- M. Sile O'Modhrain: Movement and music: designing gestural interfaces for computer-based musical instruments. 2
- Herbert H. Clark: Mixing virtual and actual. 3
Poster session 1
- Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David McGee, Philip R. Cohen: Collaborative multimodal photo annotation over digital paper. 4-11
- Maria Danninger, Tobias Kluge, Rainer Stiefelhagen: MyConnector: analysis of context cues to predict human availability for communication. 12-19
- Rebecca Lunsford, Sharon L. Oviatt: Human perception of intended addressee during computer-assisted meetings. 20-27
- Massimo Zancanaro, Bruno Lepri, Fabio Pianesi: Automatic detection of group functional roles in face to face interactions. 28-34
- Hari Krishna Maganti, Daniel Gatica-Perez: Speaker localization for microphone array-based ASR: the effects of accuracy on overlapping speech. 35-38
- Cosmin Munteanu, Gerald Penn, Ronald Baecker, Yuecheng Zhang: Automatic speech recognition for webcasts: how good is good enough and what to do when it isn't. 39-42
- Tomoko Yonezawa, Noriko Suzuki, Shinji Abe, Kenji Mase, Kiyoshi Kogure: Cross-modal coordination of expressive strength between voice and gesture for personified media. 43-50
- Norbert Reithinger, Patrick Gebhard, Markus Löckelt, Alassane Ndiaye, Norbert Pfleger, Martin Klesen: VirtualHuman: dialogic and affective interaction with virtual characters. 51-58
- Miroslav Melichar, Pavel Cenek: From vocal to multimodal dialogue management. 59-67
- Mary Ellen Foster, Tomas By, Markus Rickert, Alois C. Knoll: Human-Robot dialogue for joint construction tasks. 68-71
- Eric Schweikardt, Mark D. Gross: roBlocks: a robotic construction kit for mathematics and science education. 72-75
Oral session 1: speech and gesture integration
- Edward Tse, Saul Greenberg, Chia Shen: GSI demo: multiuser gesture/speech interaction over digital tables by wrapping single user applications. 76-83
- C. Mario Christoudias, Kate Saenko, Louis-Philippe Morency, Trevor Darrell: Co-Adaptation of audio-visual speech and gesture classifiers. 84-91
- Timo Sowa: Towards the integration of shape-related information in 3-D gestures and speech. 92-99
Oral session 2: perception and feedback
- Michael Rohs, Georg Essl: Which one is better?: information navigation techniques for spatially aware handheld displays. 100-107
- Jennifer L. Burke, Matthew S. Prewett, Ashley A. Gray, Liuquin Yang, Frederick R. B. Stilson, Michael D. Coovert, Linda R. Elliott, Elizabeth S. Redden: Comparing the effects of visual-auditory and visual-tactile feedback on user performance: a meta-analysis. 108-117
- Robert G. Malkin, Datong Chen, Jie Yang, Alex Waibel: Multimodal estimation of user interruptibility for smart mobile telephones. 118-125
Demonstration session
- E. Karpov, Imre Kiss, Jussi Leppänen, Jesper Ø. Olsen, Daniela Oria, S. Sivadas, Jilei Tian: Short message dictation on Symbian series 60 mobile phones. 126-127
- Antoine Fillinger, Stéphane Degré, Imad Hamchi, Vincent Stanford: The NIST smart data flow system II multimodal data transport infrastructure. 128
- Péter Pál Boda: A contextual multimodal integrator. 129-130
- Paulo Barthelmess, Edward C. Kaiser, Xiao Huang, David McGee, Philip R. Cohen: Collaborative multimodal photo annotation over digital paper. 131-132
- Vladimír Bergl, Martin Cmejrek, Martin Fanta, Martin Labský, Ladislav Serédi, Jan Sedivý, Lubos Ures: CarDialer: multi-modal in-vehicle cellphone control application. 133-134
- Erina Takikawa, Koichi Kinoshita, Shihong Lao, Masato Kawade: Gender and age estimation system robust to pose variations. 135-136
- Koichi Kinoshita, Yong Ma, Shihong Lao, Masato Kawade: A fast and robust 3D head pose and gaze estimation system. 137-138
Special poster session on human computing
- Zhihong Zeng, Yuxiao Hu, Yun Fu, Thomas S. Huang, Glenn I. Roisman, Zhen Wen: Audio-visual emotion recognition in adult attachment interview. 139-145
- George Caridakis, Lori Malatesta, Loïc Kessous, Noam Amir, Amaryllis Raouzaiou, Kostas Karpouzis: Modeling naturalistic affective states via facial and vocal expressions recognition. 146-154
- Wen Dong, Jonathan Gips, Alex Pentland: A 'need to know' system for group classification. 155-161
- Michel François Valstar, Maja Pantic, Zara Ambadar, Jeffrey F. Cohn: Spontaneous vs. posed facial behavior: automatic analysis of brow actions. 162-170
- Ludo Maat, Maja Pantic: Gaze-X: adaptive affective multimodal interface for single-user office scenarios. 171-178
- Zsófia Ruttkay, Dennis Reidsma, Anton Nijholt: Human computing, virtual humans and artificial imperfection. 179-184
Oral session 3: language understanding and content analysis
- Lei Chen, Mary P. Harper, Zhongqiang Huang: Using maximum entropy (ME) model to incorporate gesture cues for SU detection. 185-192
- Shaolin Qu, Joyce Y. Chai: Salience modeling based on non-verbal modalities for spoken language understanding. 193-200
- Athanasios K. Noulas, Ben J. A. Kröse: EM detection of common origin of multi-modal cues. 201-208
Oral session 4: collaborative systems and environments
- Alexander M. Arthur, Rebecca Lunsford, Matt Wesson, Sharon L. Oviatt: Prototyping novel collaborative multimodal systems: simulation, data collection and analysis tools for the next decade. 209-216
- Jiazhi Ou, Yanxin Shi, Jeffrey Wong, Susan R. Fussell, Jie Yang: Combining audio and video to predict helpers' focus of attention in multiparty remote collaboration on physical tasks. 217-224
- QianYing Wang, Alberto Battocchi, Ilenia Graziola, Fabio Pianesi, Daniel Tomasini, Massimo Zancanaro, Clifford Nass: The role of psychological ownership and ownership markers in collaborative working environment. 225-232
Special oral session on human computing
- Jeffrey F. Cohn: Foundations of human computing: facial expression and emotion. 233-238
- Maja Pantic, Alex Pentland, Anton Nijholt, Thomas S. Huang: Human computing and machine understanding of human behavior: a survey. 239-248
- Volker Blanz: Computing human faces for human viewers: automated animation in photographs and paintings. 249-256
Poster session 2
- Rutger Rienks, Dong Zhang, Daniel Gatica-Perez, Wilfried M. Post: Detection and application of influence rankings in small group meetings. 257-264
- Kevin Smith, Sileye O. Ba, Daniel Gatica-Perez, Jean-Marc Odobez: Tracking the multi person wandering visual focus of attention. 265-272
- Rebecca Lunsford, Sharon L. Oviatt, Alexander M. Arthur: Toward open-microphone engagement for multiparty interactions. 273-280
- Michael Voit, Rainer Stiefelhagen: Tracking head pose and focus of attention with multiple far-field cameras. 281-286
- Louis-Philippe Morency, C. Mario Christoudias, Trevor Darrell: Recognizing gaze aversion gestures in embodied conversational discourse. 287-294
- Matthias Rath, Michael Rohs: Explorations in sound for tilting-based interfaces. 295-301
- Mario J. Enriquez, Karon E. MacLean, Christian Chita: Haptic phonemes: basic building blocks of haptic communication. 302-309
- Nasim Melony Vafai, Shahram Payandeh, John Dill: Toward haptic rendering for a virtual dissection. 310-317
- Osamu Morikawa, Sayuri Hashimoto, Tsunetsugu Munakata, Junzo Okunaka: Embrace system for remote counseling. 318-325
- Francis K. H. Quek, David McNeill, Francisco Oliveira: Enabling multimodal communications for enhancing the ability of learning for the visually impaired. 326-332
- Matthew S. Prewett, Liuquin Yang, Frederick R. B. Stilson, Ashley A. Gray, Michael D. Coovert, Jennifer L. Burke, Elizabeth S. Redden, Linda R. Elliott: The benefits of multimodal information: a meta-analysis comparing visual and visual-tactile feedback. 333-338
Oral session 5: speech and dialogue systems
- Peng Liu, Frank K. Soong: Word graph based speech recognition error correction by handwriting input. 339-346
- Edward C. Kaiser: Using redundant speech and handwriting for learning new vocabulary and understanding abbreviations. 347-356
- Pilar Manchón Portillo, Guillermo Pérez García, Gabriel Amores Carredano: Multimodal fusion: a new hybrid strategy for dialogue systems. 357-363
Oral session 6: interfaces and usability
- Tao Lin, Atsumi Imamiya: Evaluating usability based on multimodal information: an empirical study. 364-371
- Thomas N. Smyth, Arthur E. Kirkpatrick: A new approach to haptic augmentation of the GUI. 372-379
- Nadia Mana, Fabio Pianesi: HMM-based synthesis of emotional facial expressions during speech in synthetic talking heads. 380-387
Panel
- Francis K. H. Quek: Embodiment and multimodality. 388-390