13th Audio Mostly Conference 2018: Wrexham, UK
- Stuart Cunningham, Richard Picking:
Proceedings of the Audio Mostly 2018 on Sound in Immersion and Emotion, Wrexham, United Kingdom, September 12-14, 2018. ACM 2018, ISBN 978-1-4503-6609-0
Solving Problems with Sound
- Johan Fagerlönn, Anna Sirkka, Stefan Lindberg, Roger Johnsson:
  Acoustic Vehicle Alerting Systems: Will they affect the acceptance of electric vehicles? 1:1-1:7
- Leya Breanna Baltaxe-Admony, Tom Hope, Kentaro Watanabe, Mircea Teodorescu, Sri Kurniawan, Takuichi Nishimura:
  Exploring the Creation of Useful Interfaces for Music Therapists. 2:1-2:7
- Joshua Mycroft, Tony Stockman, Joshua D. Reiss:
  A Prototype Mixer to Improve Cross-Modal Attention During Audio Mixing. 3:1-3:7
- Muhammad Abu ul Fazal, Sam Ferguson, Andrew Johnston:
  Investigating Concurrent Speech-based Designs for Information Communication. 4:1-4:8
- Sara Adhitya, Daniel Scott:
  The London Soundmap: Integrating sonic interaction design in the urban realm. 5:1-5:7
Sound in Emotion
- Andrew Godbout, Iulius A. T. Popa, Jeffrey E. Boyd:
  Emotional Musification. 6:1-6:6
- Alice Baird, Emilia Parada-Cabaleiro, Cameron Fraser, Simone Hantke, Björn W. Schuller:
  The Perceived Emotion of Isolated Synthetic Audio: The EmoSynth Dataset and Results. 7:1-7:8
- Marco Scirea, Peter W. Eklund, Julian Togelius, Sebastian Risi:
  Evolving in-game mood-expressive music with MetaCompose. 8:1-8:8
Sound in Immersion
- Martin Ljungdahl Eriksson, Lena Pareto, Ricardo Atienza, Kjetil Falkenberg Hansen:
  My Sound Space: An attentional shield for immersive redirection. 9:1-9:4
- David Ledoux, Robert Normandeau:
  An Immersive Approach to 3D-Spatialized Music Composition: Tools and Pilot Survey. 10:1-10:4
- Victor E. González Sánchez, Agata Zelechowska, Alexander Refsum Jensenius:
  Muscle activity response of the audience during an experimental music performance. 11:1-11:4
Making Sound and Music
- Athanasia Zlatintsi, Panagiotis Paraskevas Filntisis, Christos Garoufis, Antigoni Tsiami, Kosmas Kritsis, Maximos A. Kaliakatsos-Papakostas, Aggelos Gkiokas, Vassilis Katsouros, Petros Maragos:
  A Web-based Real-Time Kinect Application for Gestural Interaction with Virtual Musical Instruments. 12:1-12:6
- Luca Turchet:
  Smart Mandolin: autobiographical design, implementation, use cases, and lessons learned. 13:1-13:7
- Maria Kallionpää, Alan Chamberlain, Hans-Peter Gasselseder:
  Under Construction: Contemporary Opera in the Crossroads Between New Aesthetics, Techniques, and Technologies. 14:1-14:8
- Junko Ichino, Hayato Nao:
  Playing the Body: Making Music through Various Body Movements. 15:1-15:8
- Feng Su, Chris Joslin:
  Procedurally-Generated Audio for Soft-Body Animations. 16:1-16:8
Sonic Reflections and Enquiries
- Luca Turchet:
  Some reflections on the relation between augmented and smart musical instruments. 17:1-17:7
- Axel Berndt, Simon Waloschek, Aristotelis Hadjakos:
  Meico: A Converter Framework for Bridging the Gap between Digital Music Editions and its Applications. 18:1-18:7
- Karen Collins, Ruth Dockwray:
  Tamaglitchi: A Pilot Study of Anthropomorphism and Non-Verbal Sound. 19:1-19:6
- Oskari Koskela, Kai Tuuri:
  Investigating metaphors of musical involvement: Immersion, flow, interaction and incorporation. 20:1-20:8
Sonic Analysis and Transformations
- Elio Toppano, Alessandro Toppano:
  Staging sonic atmospheres as the new aesthetic work. 21:1-21:4
- Adam Lefaivre, John Z. Zhang:
  Music Genre Classification: Genre-Specific Characterization and Pairwise Evaluation. 22:1-22:4
- Robert Kraemer, Cornelius Poepel:
  On Transformations between Paradigms in Audio Programming. 23:1-23:4
- Nathan Renney, Benedict R. Gaster, Tom Mitchell:
  Returning to the Fundamentals on Temperament (In Digital Systems). 24:1-24:4
- Steffan Owens, Stuart Cunningham:
  Auditory Masking and the Precedence Effect in Studies of Musical Timekeeping. 25:1-25:4
Music and Interaction
- Federico Simonetta, Filippo Carnovalini, Nicola Orio, Antonio Rodà:
  Symbolic Music Similarity through a Graph-Based Representation. 26:1-26:7
- Hadrien Foroughmand Aarabi, Geoffroy Peeters:
  Music retiler: Using NMF2D source separation for audio mosaicing. 27:1-27:7
- Richard Ramchurn, Alan Chamberlain, Steve Benford:
  Designing Musical Soundtracks for Brain Controlled Interface (BCI) Systems. 28:1-28:8
- Maximos A. Kaliakatsos-Papakostas, Aggelos Gkiokas, Vassilis Katsouros:
  Interactive Control of Explicit Musical Features in Generative LSTM-based Systems. 29:1-29:7
- Anna Xambó, Johan Pauwels, Gerard Roma, Mathieu Barthet, György Fazekas:
  Jam with Jamendo: Querying a Large Music Collection by Chords from a Learner's Perspective. 30:1-30:7
Poster Papers
- Oliver Halstead, Jack Davenport, Ruben Dejaegere:
  Activating Archives: Combining Elements of Japanese Culture to Create a New and Playful Musical Experience. 31:1-31:4
- Alan Chamberlain:
  Surfing with Sound: An Ethnography of the Art of No-Input Mixing: Starting to Understand Risk, Control and Feedback in Musical Performance. 32:1-32:5
- Steven Nicholls, Stuart Cunningham, Richard Picking:
  Collaborative Artificial Intelligence in Music Production. 33:1-33:4
- Nikolaos Vryzas, Maria Matsiola, Rigas Kotsakis, Charalampos Dimoulas, George Kalliris:
  Subjective Evaluation of a Speech Emotion Recognition Interaction Framework. 34:1-34:7
- Sanjay Majumder, Benjamin D. Smith:
  Real time Pattern Based Melodic Query for Music Continuation System. 35:1-35:5
- Callum Forsyth:
  A Method for Virtual Acoustic Auralisation in VR. 36:1-36:3
- Stuart Cunningham, Jonathan Weinel, Richard Picking:
  High-Level Analysis of Audio Features for Identifying Emotional Valence in Human Singing. 37:1-37:4
- Lars Engeln, Natalie Hube, Rainer Groh:
  Immersive VisualAudioDesign: Spectral Editing in VR. 38:1-38:4
Workshop 1: The Design of Future Music Technologies: 'Sounding Out' AI, Immersive Experiences & Brain Controlled Interfaces
- Alan Chamberlain, Mads Bødker, Maria Kallionpää, Richard Ramchurn, David De Roure, Steve Benford, Alan J. Dix:
  The Design of Future Music Technologies: 'Sounding Out' AI, Immersive Experiences & Brain Controlled Interfaces. 39:1-39:2
- Alan Chamberlain, Steve Benford, Alan J. Dix:
  Re-Thinking Immersive Technologies for Audiences of the Future. 40:1-40:3
- David De Roure, Pip Willcox, Alan Chamberlain:
  Lovelace's Legacy: Creative Algorithmic Interventions for Live Performance. 41:1-41:5
- Juan Pablo Martinez-Avila, Adrian Hazzard, Alan Chamberlain, Chris Greenhalgh, Steve Benford:
  An AI-Based Design Framework to Support Musicians' Practices. 42:1-42:5
Workshop 2: MozziByte: Making Things Purr Growl and Sing
- Stephen Barrass:
  MozziByte Workshop: Making Things Purr Growl and Sing. 43:1-43:2