AVSP 2017: Stockholm, Sweden
- Slim Ouni, Chris Davis, Alexandra Jesse, Jonas Beskow:
  14th International Conference on Auditory-Visual Speech Processing, AVSP 2017, Stockholm, Sweden, August 25-26, 2017. ISCA 2017
Special Session
- Denis Burnham:
  Eric Vatikiotis-Bateson and the Birth of AVSP. 1-5
Gaze and Handedness
- Jessie S. Nixon, Catherine T. Best:
  Acoustic cue variability affects eye movement behaviour during non-native speech perception: a GAMM model. 6-11
- Chris Davis, Jeesun Kim, Outi Tuomainen, Valérie Hazan:
  The effect of age and hearing loss on partner-directed gaze in a communicative task. 12-15
- Eva Maria Nunnemann, Kirsten Bergmann, Helene Kreysa, Pia Knoeferle:
  Referential Gaze Makes a Difference in Spoken Language Comprehension: Human Speaker vs. Virtual Agent Listener Gaze. 16-20
- Katarzyna Stoltmann, Susanne Fuchs:
  The influence of handedness and pointing direction on deictic gestures and speech interaction: Evidence from motion capture data on Polish counting-out rhymes. 21-25
- Sandhya Vinay, Dawn M. Behne:
  The Influence of Familial Sinistrality on Audiovisual Speech Perception. 26-29
AV by machines
- Christian Kroos, Rikke L. Bundgaard-Nielsen, Catherine T. Best, Mark D. Plumbley:
  Using deep neural networks to estimate tongue movements from speech face motion. 30-35
- Stavros Petridis, Yujiang Wang, Zuwei Li, Maja Pantic:
  End-to-End Audiovisual Fusion with LSTMs. 36-40
- Danny Websdale, Ben Milner:
  Using visual speech information and perceptually motivated loss functions for binary mask estimation. 41-46
- Marina Zimmermann, Mostafa Mehdipour-Ghazi, Hazim Kemal Ekenel, Jean-Philippe Thiran:
  Combining Multiple Views for Visual Speech Recognition. 47-52
- Slim Ouni, Sara Dahmani, Vincent Colotte:
  On the quality of an expressive audiovisual corpus: a case study of acted speech. 53-57
- Ailbhe Cullen, Naomi Harte:
  Thin slicing to predict viewer impressions of TED Talks. 58-63
Lip reading by machines
- Alexandros Koumparoulis, Gerasimos Potamianos, Youssef Mroueh, Steven J. Rennie:
  Exploring ROI size in deep learning based lipreading. 64-69
- George Sterpu, Naomi Harte:
  Towards Lipreading Sentences with Active Appearance Models. 70-75
- Satoshi Tamura, Koichi Miyazaki, Satoru Hayamizu:
  Lipreading using deep bottleneck features for optical and depth images. 76-77
- Li Liu, Gang Feng, Denis Beautemps:
  Inner Lips Parameter Estimation based on Adaptive Ellipse Model. 78-83
Prosody and Timing
- Pascal Barone, Mathieu Marx, Anne Lasfargues-Delannoy:
  Processing of visuo-auditory prosodic information in cochlear-implanted deaf patients. 84-88
- Gilbert Ambrazaitis, David House:
  Acoustic features of multimodal prominences: Do visual beat gestures affect verbal pitch accent realization? 89-94
- Vincent Aubanel, Cassandra Masters, Jeesun Kim, Chris Davis:
  Contribution of visual rhythmic information to speech perception in noise. 95-99
- Dawn M. Behne, Marzieh Sorati, Magnus Alm:
  Perceived Audiovisual Simultaneity in Speech by Musicians and Nonmusicians: Preliminary Behavioral and Event-Related Potential (ERP) Findings. 100-104
Emotion & Attitudes
- Hisako W. Yamamoto, Misako Kawahara, Akihiro Tanaka:
  The developmental path of multisensory perception of emotion and phoneme in Japanese speakers. 105-108
- Misako Kawahara, Disa Sauter, Akihiro Tanaka:
  Impact of Culture on the Development of Multisensory Emotion Perception. 109-114
- Marina Kawase, Ikuma Adachi, Akihiro Tanaka:
  Multisensory Perception of Emotion for Human and Chimpanzee Expressions by Humans. 115-118
- Hansjörg Mixdorff, Angelika Hönemann, Albert Rilliard, Tan Lee, Matthew K. H. Ma:
  Cross-Language Perception of Audio-visual Attitudinal Expressions. 119-124
- Angelika Hoenemann, Petra Wagner:
  Facial activity of attitudinal speech in German. 125-130