Search syntax:
- case-insensitive prefix search (default): e.g., sig matches "SIGIR" as well as "signal"
- exact word search: append a dollar sign ($) to the word; e.g., graph$ matches "graph" but not "graphics"
- boolean and: separate words with a space; e.g., codd model
- boolean or: connect words with a pipe symbol (|); e.g., graph|network

Update (May 7, 2017): the phrase search operator (.) and the boolean not operator (-) had to be disabled due to technical problems. For the time being, phrase search queries yield regular prefix search results, and search terms preceded by a minus are interpreted as regular (positive) search terms.
Author search results
no matches
Venue search results
no matches
Publication search results
found 33 matches
- 2021
- Amal Abdulrahman, Deborah Richards, Hedieh Ranjbartabar, Samuel Mascarenhas: Verbal empathy and explanation to encourage behaviour change intention. J. Multimodal User Interfaces 15(2): 189-199 (2021)
- Timothy W. Bickmore, Everlyne Kimani, Ameneh Shamekhi, Prasanth Murali, Dhaval Parmar, Ha Trinh: Virtual agents as supporting media for scientific presentations. J. Multimodal User Interfaces 15(2): 131-146 (2021)
- Daniel P. Davison, Frances M. Wijnen, Vicky Charisi, Jan van der Meij, Dennis Reidsma, Vanessa Evers: Words of encouragement: how praise delivered by a social robot changes children's mindset for learning. J. Multimodal User Interfaces 15(1): 61-76 (2021)
- K. Renuga Devi, H. Hannah Inbarani: Neighborhood based decision theoretic rough set under dynamic granulation for BCI motor imagery classification. J. Multimodal User Interfaces 15(3): 301-321 (2021)
- Hamdi Dibeklioglu, Elif Sürer, Albert Ali Salah, Thierry Dutoit: Behavior and usability analysis for multimodal user interfaces. J. Multimodal User Interfaces 15(4): 335-336 (2021)
- Metehan Doyran, Arjan Schimmel, Pinar Baki, Kübra Ergin, Batikan Türkmen, Almila Akdag Salah, Sander C. J. Bakkes, Heysem Kaya, Ronald Poppe, Albert Ali Salah: MUMBAI: multi-person, multimodal board game affect and interaction analysis dataset. J. Multimodal User Interfaces 15(4): 373-391 (2021)
- Lucile Dupuy, Etienne de Sevin, Hélène Cassoudesalle, Orlane Ballot, Patrick Dehail, Bruno Aouizerate, Emmanuel Cuny, Jean-Arthur Micoulaud-Franchi, Pierre Philip: Guidelines for the design of a virtual patient for psychiatric interview training. J. Multimodal User Interfaces 15(2): 99-107 (2021)
- Feng Feng, Puhong Li, Tony Stockman: Exploring crossmodal perceptual enhancement and integration in a sequence-reproducing task with cognitive priming. J. Multimodal User Interfaces 15(1): 45-59 (2021)
- Dersu Giritlioglu, Burak Mandira, Selim Firat Yilmaz, Can Ufuk Ertenli, Berhan Faruk Akgür, Merve Kiniklioglu, Asli Gül Kurt, Emre Mutlu, Seref Can Gürel, Hamdi Dibeklioglu: Multimodal analysis of personality traits on videos of self-presentation and induced behavior. J. Multimodal User Interfaces 15(4): 337-358 (2021)
- Felix G. Hamza-Lup, Ioana R. Goldbach: Multimodal, visuo-haptic games for abstract theory instruction: grabbing charged particles. J. Multimodal User Interfaces 15(1): 1-10 (2021)
- Jun He, Xiaocui Yu, Bo Sun, Lejun Yu: Facial expression and action unit recognition augmented by their dependencies on graph convolutional networks. J. Multimodal User Interfaces 15(4): 429-440 (2021)
- Rex Hsieh, Hisashi Sato: Evaluation of avatar and voice transform in programming e-learning lectures. J. Multimodal User Interfaces 15(2): 121-129 (2021)
- Gökhan Ince, Rabia Yorganci, Ahmet Özkul, Taha Berkay Duman, Hatice Köse: An audiovisual interface-based drumming system for multimodal human-robot interaction. J. Multimodal User Interfaces 15(4): 413-428 (2021)
- Dimosthenis Kontogiorgos, André Pereira, Joakim Gustafson: Grounding behaviours with conversational interfaces: effects of embodiment and failures. J. Multimodal User Interfaces 15(2): 239-254 (2021)
- Minha Lee, Gale M. Lucas, Jonathan Gratch: Comparing mind perception in strategic exchanges: human-agent negotiation, dictator and ultimatum games. J. Multimodal User Interfaces 15(2): 201-214 (2021)
- Yi Li, Shreya Ghosh, Jyoti Joshi: PLAAN: Pain Level Assessment with Anomaly-detection based Network. J. Multimodal User Interfaces 15(4): 359-372 (2021)
- Fotis Liarokapis, Sebastian von Mammen, Athanasios Vourvopoulos: Advanced multimodal interaction techniques and user interfaces for serious games and virtual environments. J. Multimodal User Interfaces 15(3): 255-256 (2021)
- Ruixue Liu, Erin Walker, Leah Friedman, Catherine M. Arrington, Erin Treacy Solovey: fNIRS-based classification of mind-wandering with personalized window selection for multimodal learning interfaces. J. Multimodal User Interfaces 15(3): 257-272 (2021)
- Usman Malik, Mukesh Barange, Julien Saunier, Alexandre Pauchet: A novel focus encoding scheme for addressee detection in multiparty interaction using machine learning algorithms. J. Multimodal User Interfaces 15(2): 1-14 (2021)
- Johnathan Mell, Markus Beissinger, Jonathan Gratch: An expert-model and machine learning hybrid approach to predicting human-agent negotiation outcomes in varied data. J. Multimodal User Interfaces 15(2): 215-227 (2021)
- Jose Mercado, Lizbeth Escobedo, Monica Tentori: A BCI video game using neurofeedback improves the attention of children with autism. J. Multimodal User Interfaces 15(3): 273-281 (2021)
- Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman: The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability. J. Multimodal User Interfaces 15(1): 77-83 (2021)
- Lousin Moumdjian, Thomas Vervust, Joren Six, Ivan Schepers, Micheline Lesaffre, Peter Feys, Marc Leman: Correction to: The Augmented Movement Platform For Embodied Learning (AMPEL): development and reliability. J. Multimodal User Interfaces 15(1): 85 (2021)
- M. A. Viraj J. Muthugala, P. H. D. Arjuna S. Srimal, A. G. Buddhika P. Jayasekara: Improving robot's perception of uncertain spatial descriptors in navigational instructions by evaluating influential gesture notions. J. Multimodal User Interfaces 15(1): 11-24 (2021)
- David Obremski, Jean-Luc Lugrin, Philipp Schaper, Birgit Lugrin: Non-native speaker perception of Intelligent Virtual Agents in two languages: the impact of amount and type of grammatical mistakes. J. Multimodal User Interfaces 15(2): 229-238 (2021)
- Oscar Peña, Franceli L. Cibrian, Monica Tentori: Circus in Motion: a multimodal exergame supporting vestibular therapy for children with autism. J. Multimodal User Interfaces 15(3): 283-299 (2021)
- Delphine Potdevin, Céline Clavel, Nicolas Sabouret: Virtual intimacy in human-embodied conversational agent interactions: the influence of multimodality on its perception. J. Multimodal User Interfaces 15(1): 25-43 (2021)
- Elif Sürer, Mustafa Erkayaoglu, Zeynep Nur Öztürk, Furkan Yücel, Emin Alp Biyik, Burak Altan, Büsra Senderin, Zeliha Oguz, Servet Gürer, H. Sebnem Düzgün: Developing a scenario-based video game generation framework for computer and virtual reality environments: a comparative usability study. J. Multimodal User Interfaces 15(4): 393-411 (2021)
- Justyna Swidrak, Grzegorz Pochwatko, Andrea Insabato: Does an agent's touch always matter? Study on virtual Midas touch, masculinity, social status, and compliance in Polish men. J. Multimodal User Interfaces 15(2): 163-174 (2021)
- Brianna J. Tomlinson, Bruce N. Walker, Emily B. Moore: Identifying and evaluating conceptual representations for auditory-enhanced interactive physics simulations. J. Multimodal User Interfaces 15(3): 323-334 (2021)
(3 more matches not shown)
retrieved on 2024-06-22 06:55 CEST from data curated by the dblp team
all metadata released as open data under CC0 1.0 license