Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph", but not "graphics"
- boolean and: separate words by a space; e.g., codd model
- boolean or: connect words with the pipe symbol (|); e.g., graph|network

Update May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
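The operators above can be composed programmatically. The following is a minimal sketch: `dblp_query` and `dblp_search_url` are hypothetical helpers (not part of dblp), and the publication-search endpoint `https://dblp.org/search/publ/api` with its `q` and `format` parameters is assumed from dblp's public API.

```python
from urllib.parse import urlencode

# Assumed endpoint of dblp's public publication search API.
DBLP_PUBL_API = "https://dblp.org/search/publ/api"

def dblp_query(*and_terms: str, exact: bool = False) -> str:
    """Hypothetical helper: join terms with spaces (boolean AND).
    exact=True appends '$' to each term, turning the default
    prefix match into an exact word match."""
    terms = [t + "$" if exact else t for t in and_terms]
    return " ".join(terms)

def dblp_search_url(query: str, fmt: str = "json") -> str:
    """Hypothetical helper: build the full search URL for a query."""
    return f"{DBLP_PUBL_API}?{urlencode({'q': query, 'format': fmt})}"

# Prefix search (default): matches "graph", "graphics", "graphical", ...
q_prefix = dblp_query("graph")                 # "graph"
# Exact word search: matches "graph" only.
q_exact = dblp_query("graph", exact=True)      # "graph$"
# Boolean AND: space-separated terms.
q_and = dblp_query("codd", "model")            # "codd model"
# Boolean OR is written inline with the pipe symbol.
q_or = dblp_query("graph|network")             # "graph|network"
```

Note that the `|` operator is part of the query string itself, so it passes through unchanged; only the AND/exact composition needs helper logic.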
Author search results: no matches
Venue search results: no matches
Publication search results: found 61 matches
- 2009
  - Teemu Tuomas Ahmaniemi, Vuokko Lantz: Augmented reality target finding based on tactile cues. ICMI 2009: 335-342
  - Rami Ajaj, Christian Jacquemin, Frédéric Vernier: RVDT: a design space for multiple input devices, multiple views and multiple display surfaces combination. ICMI 2009: 269-276
  - Vicente Alabau, Daniel Ortiz, Verónica Romero, Jorge Ocampo: A multimodal predictive-interactive application for computer assisted transcription and translation. ICMI 2009: 227-228
  - Thomas Bader, Matthias Vogelgesang, Edmund Klaus: Multimodal integration of natural gaze behavior for intention recognition during object manipulation. ICMI 2009: 199-206
  - Tyler Baldwin, Joyce Y. Chai, Katrin Kirchhoff: Communicative gestures in coreference identification in multiparty meetings. ICMI 2009: 211-218
  - Dan Bohus, Eric Horvitz: Dialog in the open world: platform and applications. ICMI 2009: 31-38
  - Cynthia Breazeal: Living better with robots. ICMI 2009: 1-2
  - Stephen A. Brewster: Head-up interaction: can we break our addiction to the screen and keyboard? ICMI 2009: 151-152
  - Justine Cassell, Kathleen Geraghty, Berto Gonzalez, John Borland: Modeling culturally authentic style shifting with virtual peers. ICMI 2009: 135-142
  - Ginevra Castellano, André Pereira, Iolanda Leite, Ana Paiva, Peter W. McOwan: Detecting user engagement with a robot companion using task and social interaction-based features. ICMI 2009: 119-126
  - Sunsern Cheamanunkul, Evan Ettinger, Matthew Jacobsen, Patrick Lai, Yoav Freund: Detecting, tracking and interacting with people in a public space. ICMI 2009: 79-86
  - Lei Chen, Mary P. Harper: Multimodal floor control shift detection. ICMI 2009: 15-22
  - Neil Cooke, Martin J. Russell: Cache-based language model adaptation using visual attention for ASR in meeting scenarios. ICMI 2009: 87-90
  - David Demirdjian, Chenna Varri: Recognizing events with temporal random forests. ICMI 2009: 293-296
  - Prasenjit Dey, Ramchandrula Sitaram, Rahul Ajmera, Kalika Bali: Voice key board: multimodal indic text input. ICMI 2009: 313-318
  - Angelika Dierker, Christian Mertes, Thomas Hermann, Marc Hanheide, Gerhard Sagerer: Mediated attention with multimodal augmented reality. ICMI 2009: 245-252
  - Bruno Dumas, Rolf Ingold, Denis Lalanne: Benchmarking fusion engines of multimodal interactive systems. ICMI 2009: 169-176
  - Bruno Dumas, Denis Lalanne, Rolf Ingold: HephaisTK: a toolkit for rapid prototyping of multimodal interfaces. ICMI 2009: 231-232
  - Jan B. F. van Erp, Peter J. Werkhoven, Marieke E. Thurlings, Anne-Marie Brouwer: Navigation with a passive brain based interface. ICMI 2009: 225-226
  - Giuseppe Di Fabbrizio, Thomas Okken, Jay G. Wilpon: A speech mashup framework for multimodal mobile services. ICMI 2009: 71-78
  - Rui Fang, Joyce Y. Chai, Fernanda Ferreira: Between linguistic attention and gaze fixations in multimodal conversational interfaces. ICMI 2009: 143-150
  - Katayoun Farrahi, Daniel Gatica-Perez: Learning and predicting multimodal daily life patterns from cell phones. ICMI 2009: 277-280
  - Ian R. Fasel, Masahiro Shiomi, Pilippe-Emmanuel Chadutaud, Takayuki Kanda, Norihiro Hagita, Hiroshi Ishiguro: Multi-modal features for real-time detection of human-robot interaction categories. ICMI 2009: 127-134
  - Victor S. Finomore, Dianne K. Popik, Douglas Brungart, Brian D. Simpson: Multi-modal communication system. ICMI 2009: 229-230
  - Sebastian Germesin, Theresa Wilson: Agreement detection in multiparty conversation. ICMI 2009: 7-14
  - Eve E. Hoggan, Roope Raisamo, Stephen A. Brewster: Mapping information to audio and tactile icons. ICMI 2009: 327-334
  - Hendrik Iben, Hannes Baumann, Carmen Ruthenbeck, Tobias Klug: Visual based picking supported by context awareness: comparing picking performance using paper-based lists versus lists presented on a head mounted display with contextual support. ICMI 2009: 281-288
  - Kentaro Ishizuka, Shoko Araki, Kazuhiro Otsuka, Tomohiro Nakatani, Masakiyo Fujimoto: A speaker diarization method based on the probabilistic fusion of audio-visual location information. ICMI 2009: 55-62
  - Dinesh Babu Jayagopi, Daniel Gatica-Perez: Discovering group nonverbal conversational patterns with topics. ICMI 2009: 3-6
  - Michael Johnston: Building multimodal applications with EMMA. ICMI 2009: 47-54
(31 more matches not shown)
retrieved on 2024-10-20 07:40 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license