


16th CMMR 2023
- Sølvi Ystad, Richard Kronland-Martinet, Tetsuro Kitahara, Keiji Hirata, Mitsuko Aramaki:
 Music and Sound Generation in the AI Era - 16th International Symposium, CMMR 2023, Tokyo, Japan, November 13-17, 2023, Revised Selected Papers. Lecture Notes in Computer Science 15236, Springer 2026, ISBN 978-3-032-02041-3
Artificial Intelligence, Cognitive Science and Skill Science for Sound and Music
- Tatsuya Daikoku: 
 Emergence of Creativity and Individuality in Music: Insights from Brain's Statistical Learning and Its Embodied Mechanisms. 3-14
- Max Graf, Mathieu Barthet: 
 Multimodal Hand Tracking for XR Musical Instruments Using Electromyography. 15-39
- Mio Matsuura, Ayanae Sasaki, Masaki Matsubara, Hiroki Watanabe, Yoshinari Takegawa, Keiji Hirata: 
 Comprehensive Understanding of Patterns of Skill Acquisition and Forgetting in Music Games: Does Musical Experience Accelerate Forgetting? 40-58
- Timothy Schmele, Eleonora De Filippi, Arijit Nandi, Alexandre Pereda-Baños, Adan Garriga: 
 Emotional Impact of Source Localization in Music Using Machine Learning and EEG - A Proof-of-Concept Study. 59-71
- Yasumasa Yamaguchi, Taku Kawada, Toru Nagahama, Tatsuya Horita: 
 A Quantitative Evaluation of a Musical Performance Support System Utilizing a Musical Sophistication Test Battery. 72-79
Music and Sound Generation: Emerging Approaches and Diverse Applications
- Jingjing Tang, Geraint A. Wiggins, György Fazekas: 
 Reconstructing Human Expressiveness in Piano Performances with a Transformer Network. 83-96
- Grégory Beller, Jacob Sello, Georg Hajdu, Thomas Görne: 
 Spatially Situated Media and Spatial Sampler XR - Genealogy and Organology of a Family of Gestural and Spatial Musical Instruments in Mixed Reality. 97-112
- Eleanor Row, Jingjing Tang, György Fazekas: 
 JAZZVAR: A Dataset of Variations Found within Solo Piano Performances of Jazz Standards for Music Overpainting. 113-126
- Marco Amerotti, Steve Benford, Bob L. T. Sturm, Craig Vear: 
 A Live Performance Rule System Informed by Irish Traditional Dance Music. 127-139
- Yoshitaka Tomiyama, Tetsuro Kitahara, Taro Masuda, Koki Kitaya, Yuya Matsumura, Ayari Takezawa, Tsuyoshi Odaira, Kanako Baba: 
 Benzaiten: A Non-Expert-Friendly Event of Automatic Melody Generation Contest. 140-148
- Yuqiang Li, Shengchen Li, György Fazekas: 
 Pitch Class and Octave-Based Pitch Embedding Training Strategies for Symbolic Music Generation. 149-167
- Damian Dziwis: 
 VERSNIZ - Audiovisual Worldbuilding Through Live Coding as a Performance Practice in the Metaverse. 168-180
- Tomoo Kouzai, Tetsuro Kitahara: 
 An Audio-to-Audio Approach to Generate Bass Lines from Guitar's Chord Backing. 181-189
Computational Research on Music Evolution
- Eita Nakamura, Tim Eipert, Fabian C. Moss: 
 Historical Changes of Modes and Their Substructure Modeled as Pitch Distributions in Plainchant from the 1100s to the 1500s. 193-204
- Eita Nakamura: 
 Computational Analysis of Selection and Mutation Probabilities in the Evolution of Chord Progressions. 205-217
- Dongju Park, Juyong Park: 
 Bipartite Network Analysis of the Stylistic Evolution of Sample-Based Music. 218-226
- Halla Kim, Juyong Park: 
 On the Analysis of Voicing Novelty in Classical Piano Music. 227-235
Computational Musicology
- Christofer Julio, Feng-Hsu Lee, Li Su: 
 Interpretable Rule Learning and Evaluation of Early Twentieth-Century Music Styles. 239-251
- Yu-Fen Huang, Li Su: 
 Toward Empirical Analysis for Stylistic Expression in Piano Performance. 252-266
- Masatoshi Hamanaka, Keiji Hirata, Satoshi Tojo: 
 deepGTTM-IV: Deep Learning Based Time-Span Tree Analyzer of "A Generative Theory of Tonal Music". 267-275
- Chandan Misra, Swarup Chattopadhyay: 
 SANGEET: An XML Based Open Dataset for Research in Hindustani Sangeet. 276-283
- Matteo Bizzarri: 
 Music Analysis Through Mathematical Logic. 284-296
- Riku Takahashi, Risa Izu, Yoshinari Takegawa, Keiji Hirata: 
 Global Prediction of Time-Span Tree by Cloze Task. 297-309
Music Recognition and Creation Tools
- David Rizo, Jorge Calvo-Zaragoza, Juan C. Martinez-Sevilla, Adrian Rosello, Eliseo Fuentes-Martínez: 
 Design of a Music Recognition, Encoding, and Transcription Online Tool. 313-328
- Matthew McCloskey, Gabrielle Curcio, Amulya Badineni, Kevin McGrath, Georgios Papamichail, Dimitris P. Papamichail: 
 Automated Arrangements of Multi-part Music for Sets of Monophonic Instruments. 329-336
- Emmanouil Karystinaios, Francesco Foscarin, Florent Jacquemard, Masahiko Sakai, Satoshi Tojo, Gerhard Widmer: 
 8 + 8 = 4: Formalizing Time Units to Handle Symbolic Music Durations. 337-348
- Hyon Kim, Xavier Serra: 
 DiffVel: Note-Level MIDI Velocity Estimation for Piano Performance by a Double Conditioned Diffusion Model. 349-361
Music Information Retrieval
- Geetika Arora, Keyur Choudhari, Ponnurangam Kumaraguru, Vinoo Alluri: 
 From Sunrise to Sunset: Investigating Diurnal Rhythmic Patterns in Music Listening Habits in India. 365-377
- Tiange Zhu, Danny Diamond, James McDermott, Raphaël Fournier-S'niehotta, Mathieu d'Aquin, Philippe Rigaux: 
 A Novel Local Alignment-Based Approach to Motif Extraction in Polyphonic Music. 378-389
- Ryusei Hayashi, Tetsuro Kitahara: 
 Predicting Audio Features of Background Music From Game Scenes. 390-401
- Tomoyasu Nakano, Momoka Sasaki, Mayuko Kishi, Masahiro Hamasaki, Masataka Goto, Yoshinori Hijikata: 
 A Music Exploration Interface Based on Vocal Timbre and Pitch in Popular Music. 402-418
- Le Cai, Sam Ferguson, Gengfa Fang, Hani Alshamrani: 
 Exploring Diverse Sounds: Identifying Outliers in a Music Corpus. 419-432
Audio Signal Processing and HCI in Music
- Rory Hoy, Doug Van Nort: 
 Co-creation and Deep Listening Between Humans and Machines in a Telematic Workshop Environment. 435-447
- Matthias Nowakowski, Aristotelis Hadjakos: 
 Estimating Interaction Time in Music Notation Editors. 448-460
- António Sá Pinto, Gilberto Bernardes, Matthew E. P. Davies: 
 Challenging Beat Tracking: Tackling Polyrhythm, Polymetre, and Polytempo with Human-in-the-Loop Adaptation. 461-479
- Jeremy Hyrkas: 
 Algorithms for Roughness Control Using Frequency Shifting and Attenuation of Partials in Audio. 480-493
- Sai Oshita, Tetsuro Kitahara: 
 NUFluteDB: Flute Sound Dataset with Appropriate and Inappropriate Blowing Styles. 494-503
