Search dblp
Full-text search
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search (append a dollar sign ($) to the word): e.g., graph$ matches "graph", but not "graphics"
- boolean and (separate words by a space): e.g., codd model
- boolean or (connect words by the pipe symbol (|)): e.g., graph|network
Update, May 7, 2017: Please note that we had to disable the phrase search operator (.) and the boolean not operator (-) due to technical problems. For the time being, phrase search queries will yield regular prefix search results, and search terms preceded by a minus will be interpreted as regular (positive) search terms.
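The query semantics described above (prefix match by default, `$` for exact words, space as AND, `|` as OR) can be sketched in a few lines. This is a minimal illustrative model of the documented behavior, not dblp's actual implementation; the function names `term_matches` and `query_matches` are hypothetical.

```python
def term_matches(term: str, word: str) -> bool:
    """Match one query term against one document word (case-insensitive)."""
    term, word = term.lower(), word.lower()
    if term.endswith("$"):
        # exact word search: graph$ matches "graph", but not "graphics"
        return word == term[:-1]
    # default: case-insensitive prefix search, so "sig" matches "signal"
    return word.startswith(term)

def query_matches(query: str, text: str) -> bool:
    """Space-separated terms are AND-ed; '|' within a term means OR."""
    words = text.split()
    for conjunct in query.split():
        alternatives = conjunct.split("|")
        if not any(term_matches(alt, w) for alt in alternatives for w in words):
            return False
    return True
```

For example, `query_matches("sig", "signal processing")` succeeds via the prefix rule, while `query_matches("graph$", "graphics pipeline")` fails because `$` demands an exact word.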
Author search results: no matches
Venue search results: no matches
Publication search results: found 27 matches
- 2016
  - Md. Atiqur Rahman Ahad, Md. Nazmul Islam, Israt Jahan: Action recognition based on binary patterns of action-history and histogram of oriented gradient. J. Multimodal User Interfaces 10(4): 335-344 (2016)
  - Areti Andreopoulou, Brian F. G. Katz: Subjective HRTF evaluations for obtaining global similarity metrics of assessors and assessees. J. Multimodal User Interfaces 10(3): 259-271 (2016)
  - Anastasios G. Bakaoukas, Florin Coada, Fotis Liarokapis: Examining brain activity while playing computer games. J. Multimodal User Interfaces 10(1): 13-29 (2016)
  - Hans-Peter Brückner, Sebastian Lesse, Wolfgang M. Theimer, Holger Blume: Design space exploration of hardware platforms for interactive low latency movement sonification. J. Multimodal User Interfaces 10(1): 1-11 (2016)
  - Cédric Camier, Julien Boissinot, Catherine Guastavino: On the robustness of upper limits for circular auditory motion perception. J. Multimodal User Interfaces 10(3): 285-298 (2016)
  - Abhinav Dhall, Roland Goecke, Tom Gedeon, Nicu Sebe: Emotion recognition in the wild. J. Multimodal User Interfaces 10(2): 95-97 (2016)
  - Hajer Fradi, Jean-Luc Dugelay: Spatial and temporal variations of feature tracks for crowd behavior analysis. J. Multimodal User Interfaces 10(4): 307-317 (2016)
  - Michele Geronazzo, Federico Avanzini, Federico Fontana: Auditory navigation with a tubular acoustic model for interactive distance cues and personalized head-related transfer functions. J. Multimodal User Interfaces 10(3): 273-284 (2016)
  - Brian Horsak, Ronald Dlapka, Michael Iber, Anna-Maria Gorgas, Anita Kiselka, Christian Gradl, Tarique Siragy, Jakob Doppler: SONIGait: a wireless instrumented insole device for real-time sonification of gait. J. Multimodal User Interfaces 10(3): 195-206 (2016)
  - M. Shamim Hossain, Ghulam Muhammad: Audio-visual emotion recognition using multi-directional regression and Ridgelet transform. J. Multimodal User Interfaces 10(4): 325-333 (2016)
  - Markus Kächele, Martin Schels, Sascha Meudt, Günther Palm, Friedhelm Schwenker: Revisiting the EmotiW challenge: how wild is it really? J. Multimodal User Interfaces 10(2): 151-162 (2016)
  - Samira Ebrahimi Kahou, Xavier Bouthillier, Pascal Lamblin, Çaglar Gülçehre, Vincent Michalski, Kishore Konda, Sébastien Jean, Pierre Froumenty, Yann N. Dauphin, Nicolas Boulanger-Lewandowski, Raul Chandias Ferrari, Mehdi Mirza, David Warde-Farley, Aaron C. Courville, Pascal Vincent, Roland Memisevic, Christopher Joseph Pal, Yoshua Bengio: EmoNets: Multimodal deep learning approaches for emotion recognition in video. J. Multimodal User Interfaces 10(2): 99-111 (2016)
  - Brian F. G. Katz, Georgios N. Marentakis: Advances in auditory display research. J. Multimodal User Interfaces 10(3): 191-193 (2016)
  - Heysem Kaya, Albert Ali Salah: Combining modality-specific extreme learning machines for emotion recognition in the wild. J. Multimodal User Interfaces 10(2): 139-149 (2016)
  - Tawhidul Islam Khan, Harino Yoho: Integrity analysis of knee joint by acoustic emission technique. J. Multimodal User Interfaces 10(4): 319-324 (2016)
  - Bo-Kyeong Kim, Jihyeon Roh, Suh-Yeon Dong, Soo-Young Lee: Hierarchical committee of deep convolutional neural networks for robust facial expression recognition. J. Multimodal User Interfaces 10(2): 173-189 (2016)
  - Mengyi Liu, Ruiping Wang, Shaoxin Li, Zhiwu Huang, Shiguang Shan, Xilin Chen: Video modeling and learning on Riemannian manifold for emotion recognition in the wild. J. Multimodal User Interfaces 10(2): 113-124 (2016)
  - Oussama Metatla, Fiore Martin, Adam Parkinson, Nick Bryan-Kinns, Tony Stockman, Atau Tanaka: Audio-haptic interfaces for digital audio workstations. J. Multimodal User Interfaces 10(3): 247-258 (2016)
  - Alejandro Moreno, Ronald Poppe: Automatic behavior analysis in tag games: from traditional spaces to interactive playgrounds. J. Multimodal User Interfaces 10(1): 63-75 (2016)
  - Rifat Muhammad Mueid, Chandrama Ahmed, Md. Atiqur Rahman Ahad: Pedestrian activity classification using patterns of motion and histogram of oriented gradient. J. Multimodal User Interfaces 10(4): 299-305 (2016)
  - Pedro Alves Nogueira, Vasco Torres, Rui Rodrigues, Eugénio C. Oliveira, Lennart E. Nacke: Vanishing scares: biofeedback modulation of affective player experiences in a procedural horror game. J. Multimodal User Interfaces 10(1): 31-62 (2016)
  - Joyeeta Singha, Rabul Hussain Laskar: Recognition of global hand gestures using self co-articulation information and classifier fusion. J. Multimodal User Interfaces 10(1): 77-93 (2016)
  - Benjamin Stahl, Balaji Thoshkahna: Design and evaluation of the effectiveness of a sonification technique for real time heart-rate data. J. Multimodal User Interfaces 10(3): 207-219 (2016)
  - Bo Sun, Liandong Li, Xuewen Wu, Tian Zuo, Ying Chen, Guoyan Zhou, Jun He, Xiaoming Zhu: Combining feature-level and decision-level fusion in a hierarchical classifier for emotion recognition in the wild. J. Multimodal User Interfaces 10(2): 125-137 (2016)
  - Francesco Tordini, Albert S. Bregman, Jeremy R. Cooperstock: Prioritizing foreground selection of natural chirp sounds by tempo and spectral centroid. J. Multimodal User Interfaces 10(3): 221-234 (2016)
  - Bartlomiej P. Walus, Sandra Pauletto, Amanda Mason-Jones: Sonification and music as support to the communication of alcohol-related health risks to young people. J. Multimodal User Interfaces 10(3): 235-246 (2016)
  - Yuan Zong, Wenming Zheng, Xiaohua Huang, Keyu Yan, Jingwei Yan, Tong Zhang: Emotion recognition in the wild via sparse transductive transfer linear discriminant analysis. J. Multimodal User Interfaces 10(2): 163-172 (2016)
retrieved on 2024-05-06 11:29 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license