


Shigeki Sagayama
Person information

- affiliation: Meiji University, Tokyo, Japan
2020 – today
- 2022
- [c174] Keiko Ochi, Nobutaka Ono, Keiho Owada, Miho Kuroda, Shigeki Sagayama, Hidenori Yamasue:
Entrainment Analysis for Assessment of Autistic Speech Prosody Using Bottleneck Features of Deep Neural Network. ICASSP 2022: 8492-8496
- 2021
- [c173] Keiko Ochi, Masaki Kojima, Keiho Owada, Nobutaka Ono, Shigeki Sagayama, Hidenori Yamasue:
Pitch and Volume Stability in the Communicative Response of Adults with Autism. APSIPA ASC 2021: 428-432
- [c172] Yasuyuki Saito, Honoka Fujii, Shigeki Sagayama:
Semi-automatic music piece creation based on impression words extracted from object and background in color image. GCCE 2021: 268-272
- 2020
- [j37] Junya Koguchi, Shinnosuke Takamichi, Masanori Morise, Hiroshi Saruwatari, Shigeki Sagayama:
DNN-Based Full-Band Speech Synthesis Using GMM Approximation of Spectral Envelope. IEICE Trans. Inf. Syst. 103-D(12): 2673-2681 (2020)
- [j36] Christoph M. Wilk, Shigeki Sagayama:
A Parameterized Harmony Model for Automatic Music Completion. J. Inf. Process. 28: 258-266 (2020)
- [c171] Yasuyuki Saito, Yasuji Sakai, Yuu Igarashi, Suguru Agata, Eita Nakamura, Shigeki Sagayama:
Music Recreation in Nursing Home using Automatic Music Accompaniment System and Score of VLN. LifeTech 2020: 127-131
2010 – 2019
- 2019
- [j35] Christoph M. Wilk, Shigeki Sagayama:
Automatic Music Completion Based on Joint Optimization of Harmony Progression and Voicing. J. Inf. Process. 27: 693-700 (2019)
- [c170] Christoph M. Wilk, Shigeki Sagayama:
Polyphonic Voicing Optimization for Automatic Music Completion. APSIPA 2019: 375-382
- [c169] Matsuto Hori, Christoph M. Wilk, Shigeki Sagayama:
Piano Practice Evaluation and Visualization by HMM for Arbitrary Jumps and Mistakes. CISS 2019: 1-5
- [c168] You Li, Christoph M. Wilk, Takeshi Hori, Shigeki Sagayama:
Automatic Piano Reduction of Orchestral Music Based on Musical Entropy. CISS 2019: 1-5
- [c167] Daiki Mitsumoto, Takeshi Hori, Shigeki Sagayama, Hidenori Yamasue, Keiho Owada, Masaki Kojima, Keiko Ochi, Nobutaka Ono:
Autism Spectrum Disorder Discrimination Based on Voice Activities Related to Fillers and Laughter. CISS 2019: 1-6
- 2018
- [c166] Christoph M. Wilk, Shigeki Sagayama:
Harmony and Voicing Interpolation for Automatic Music Composition Assistance. APSIPA 2018: 89-98
- [c165] Takeshi Hori, Kazuyuki Nakamura, Shigeki Sagayama:
Multiresolutional Hierarchical Bayesian NMF for Detailed Audio Analysis of Music Performances. APSIPA 2018: 1626-1635
- [c164] Takuya Takahashi, Takeshi Hori, Christoph M. Wilk, Shigeki Sagayama:
Semi-Supervised NMF in the chroma Domain Applied to Music Harmony Estimation. APSIPA 2018: 1636-1641
- [c163] Junya Koguchi, Shigeki Sagayama:
Composite Wavelet Model for Stability-Oriented Speech Synthesis from Cepstral Features. APSIPA 2018: 1697-1701
- 2017
- [j34] Eita Nakamura, Kazuyoshi Yoshii, Shigeki Sagayama:
Rhythm Transcription of Polyphonic Piano Music Based on Merged-Output HMM for Multiple Voices. IEEE ACM Trans. Audio Speech Lang. Process. 25(4): 794-806 (2017)
- [c162] Takeshi Hori, Kazuyuki Nakamura, Shigeki Sagayama:
Music chord recognition from audio data using bidirectional encoder-decoder LSTMs. APSIPA 2017: 1312-1315
- [c161] Gen Hori, Shigeki Sagayama:
Variant of Viterbi algorithm based on p-Norm. DSP 2017: 1-5
- [i4] Eita Nakamura, Kazuyoshi Yoshii, Shigeki Sagayama:
Rhythm Transcription of Polyphonic Piano Music Based on Merged-Output HMM for Multiple Voices. CoRR abs/1701.08343 (2017)
- 2016
- [j33] Hideyuki Tachibana, Yuu Mizuno, Nobutaka Ono, Shigeki Sagayama:
A Real-time Audio-to-audio Karaoke Generation System for Monaural Recordings Based on Singing Voice Suppression and Key Conversion Techniques. J. Inf. Process. 24(3): 470-482 (2016)
- [j32] Tomohiko Nakamura, Eita Nakamura, Shigeki Sagayama:
Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips. IEEE ACM Trans. Audio Speech Lang. Process. 24(2): 329-339 (2016)
- [c160] Gen Hori, Shigeki Sagayama:
Minimax Viterbi Algorithm for HMM-Based Guitar Fingering Decision. ISMIR 2016: 448-453
- [c159] Yasuhiro Hamada, Nobutaka Ono, Shigeki Sagayama:
Non-filter waveform generation from cepstrum using spectral phase reconstruction. SSW 2016: 27-31
- 2015
- [j31] Nobutaka Ito, Emmanuel Vincent, Tomohiro Nakatani, Nobutaka Ono, Shoko Araki, Shigeki Sagayama:
Blind Suppression of Nonstationary Diffuse Acoustic Noise Based on Spatial Covariance Matrix Decomposition. J. Signal Process. Syst. 79(2): 145-157 (2015)
- [c158] Eita Nakamura, Shigeki Sagayama:
Automatic Piano Reduction from Ensemble Scores Based on Merged-Output Hidden Markov Model. ICMC 2015
- [c157] Eita Nakamura, Philippe Cuvillier, Arshia Cont, Nobutaka Ono, Shigeki Sagayama:
Autoregressive Hidden Semi-Markov Model of Symbolic Music Performance for Score Following. ISMIR 2015: 392-398
- [i3] Tomohiko Nakamura, Eita Nakamura, Shigeki Sagayama:
Real-Time Audio-to-Score Alignment of Music Performances Containing Errors and Arbitrary Repeats and Skips. CoRR abs/1512.07748 (2015)
- 2014
- [j30] Hideyuki Tachibana, Nobutaka Ono, Shigeki Sagayama:
Singing Voice Enhancement in Monaural Music Signals Based on Two-stage Harmonic/Percussive Sound Separation on Multiple Resolution Spectrograms. IEEE ACM Trans. Audio Speech Lang. Process. 22(1): 228-237 (2014)
- [j29] Hideyuki Tachibana, Nobutaka Ono, Hirokazu Kameoka, Shigeki Sagayama:
Harmonic/percussive sound separation based on anisotropic smoothness of spectrograms. IEEE ACM Trans. Audio Speech Lang. Process. 22(12): 2059-2073 (2014)
- [c156] Toru Taniguchi, Nobutaka Ono, Akinori Kawamura, Shigeki Sagayama:
An auxiliary-function approach to online independent vector analysis for real-time blind source separation. HSCMA 2014: 107-111
- [c155] Gen Hori, Shigeki Sagayama:
HMM-Based Automatic Arrangement for Guitars with Transposition and its Implementation. ICMC 2014
- [c154] Eita Nakamura, Nobutaka Ono, Yasuyuki Saito, Shigeki Sagayama:
Merged-Output Hidden Markov Model for Score Following of MIDI Performance with Ornaments, Desynchronized Voices, Repeats and Skips. ICMC 2014
- [c153] Eita Nakamura, Nobutaka Ono, Shigeki Sagayama:
Merged-Output HMM for Piano Fingering of Both Hands. ISMIR 2014: 531-536
- [i2] Eita Nakamura, Tomohiko Nakamura, Yasuyuki Saito, Nobutaka Ono, Shigeki Sagayama:
Outer-Product Hidden Markov Model and Polyphonic MIDI Score Following. CoRR abs/1404.2313 (2014)
- [i1] Eita Nakamura, Nobutaka Ono, Shigeki Sagayama, Kenji Watanabe:
A Stochastic Temporal Model of Polyphonic MIDI Performance with Ornaments. CoRR abs/1404.2314 (2014)
- 2013
- [j28] Hirokazu Kameoka, Misa Sato, Takuma Ono, Nobutaka Ono, Shigeki Sagayama:
Bayesian Nonparametric Approach to Blind Separation of Infinitely Many Sparse Sources. IEICE Trans. Fundam. Electron. Commun. Comput. Sci. 96-A(10): 1928-1937 (2013)
- [j27] Gen Hori, Hirokazu Kameoka, Shigeki Sagayama:
Input-Output HMM Applied to Automatic Arrangement for Guitars. J. Inf. Process. 21(2): 264-271 (2013)
- [j26] Stanislaw Andrzej Raczynski, Emmanuel Vincent, Shigeki Sagayama:
Dynamic Bayesian Networks for Symbolic Polyphonic Pitch Modeling. IEEE Trans. Speech Audio Process. 21(9): 1830-1840 (2013)
- [c152] Masato Tsuchiya, Kazuki Ochiai, Hirokazu Kameoka, Shigeki Sagayama:
Probabilistic model of two-dimensional rhythm tree structure representation for automatic transcription of polyphonic MIDI signals. APSIPA 2013: 1-6
- [c151] Tatsuma Ishihara, Hirokazu Kameoka, Kota Yoshizato, Daisuke Saito, Shigeki Sagayama:
Probabilistic speech F0 contour model incorporating statistical vocabulary model of phrase-accent command sequence. INTERSPEECH 2013: 1017-1021
- [c150] Hirokazu Kameoka, Kota Yoshizato, Tatsuma Ishihara, Yasunori Ohishi, Kunio Kashino, Shigeki Sagayama:
Generative modeling of speech F0 contours. INTERSPEECH 2013: 1826-1830
- [c149] Nobutaka Ito, Emmanuel Vincent, Nobutaka Ono, Shigeki Sagayama:
General algorithms for estimating spectrogram and transfer functions of target signal for blind suppression of diffuse noise. MLSP 2013: 1-6
- [c148] Nobukatsu Hojo, Kota Yoshizato, Hirokazu Kameoka, Daisuke Saito, Shigeki Sagayama:
Text-to-speech synthesizer based on combination of composite wavelet and hidden Markov models. SSW 2013: 129-134
- [p3] Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto, Shigeki Sagayama:
Statistical Approach to Automatic Expressive Rendition of Polyphonic Piano Music. Guide to Computing for Expressive Music Performance 2013: 145-179
- 2012
- [j25] Dong Yu, Geoffrey E. Hinton, Nelson Morgan, Jen-Tzung Chien, Shigeki Sagayama:
Introduction to the Special Section on Deep Learning for Speech and Language Processing. IEEE Trans. Speech Audio Process. 20(1): 4-6 (2012)
- [c147] Kazuki Ochiai, Hirokazu Kameoka, Shigeki Sagayama:
Explicit beat structure modeling for non-negative matrix factorization-based multipitch analysis. ICASSP 2012: 133-136
- [c146] Hideyuki Tachibana, Hirokazu Kameoka, Nobutaka Ono, Shigeki Sagayama:
Comparative evaluations of various harmonic/percussive sound separation algorithms based on anisotropic continuity of spectrogram. ICASSP 2012: 465-468
- [c145] Takuma Ono, Nobutaka Ono, Shigeki Sagayama:
User-guided independent vector analysis with source activity tuning. ICASSP 2012: 2417-2420
- [c144] Miquel Espi, Masakiyo Fujimoto, Daisuke Saito, Nobutaka Ono, Shigeki Sagayama:
A tandem connectionist model using combination of multi-scale spectro-temporal features for acoustic event detection. ICASSP 2012: 4293-4296
- [c143] Hirokazu Kameoka, Masahiro Nakano, Kazuki Ochiai, Yutaka Imoto, Kunio Kashino, Shigeki Sagayama:
Constrained and regularized variants of non-negative matrix factorization incorporating music-specific constraints. ICASSP 2012: 5365-5368
- [c142] Satoru Fukayama, Daisuke Saito, Shigeki Sagayama:
Assistance for Novice Users on Creating Songs from Japanese Lyrics. ICMC 2012
- [c141] Kota Yoshizato, Hirokazu Kameoka, Daisuke Saito, Shigeki Sagayama:
Hidden Markov Convolutive Mixture Model for Pitch Contour Analysis of Speech. INTERSPEECH 2012: 390-393
- [c140] Shigeki Matsuda, Naoya Ito, Kosuke Tsujino, Hideki Kashioka, Shigeki Sagayama:
Speaker-Dependent Voice Activity Detection Robust to Background Speech Noise. INTERSPEECH 2012: 2626-2629
- [c139] Takayoshi Oshima, Yutaka Kamamoto, Takehiro Moriya, Nobutaka Ono, Shigeki Sagayama:
Variable-length coding of ACELP gain using Entropy-Constrained VQ. ISCIT 2012: 105-109
- [c138] Hirokazu Kameoka, Kazuki Ochiai, Masahiro Nakano, Masato Tsuchiya, Shigeki Sagayama:
Context-free 2D Tree Structure Model of Musical Notes for Bayesian Modeling of Polyphonic Spectrograms. ISMIR 2012: 307-312
- [c137] Hirokazu Kameoka, Misa Sato, Takuma Ono, Nobutaka Ono, Shigeki Sagayama:
Blind Separation of Infinitely Many Sparse Sources. IWAENC 2012
- 2011
- [j24] Meinard Müller, Daniel P. W. Ellis, Anssi Klapuri, Gaël Richard, Shigeki Sagayama:
Introduction to the Special Issue on Music Signal Processing. IEEE J. Sel. Top. Signal Process. 5(6): 1085-1087 (2011)
- [j23] Jun Wu, Emmanuel Vincent, Stanislaw Andrzej Raczynski, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Polyphonic Pitch Estimation and Instrument Identification by Joint Modeling of Sustained and Attack Sounds. IEEE J. Sel. Top. Signal Process. 5(6): 1124-1132 (2011)
- [j22] Jonathan Le Roux, Hirokazu Kameoka, Nobutaka Ono, Alain de Cheveigné, Shigeki Sagayama:
Computational auditory induction as a missing-data model-fitting problem with Bregman divergence. Speech Commun. 53(5): 658-676 (2011)
- [j21] Nobutaka Ito, Hikaru Shimizu, Nobutaka Ono, Shigeki Sagayama:
Diffuse Noise Suppression Using Crystal-Shaped Microphone Arrays. IEEE Trans. Speech Audio Process. 19(7): 2101-2110 (2011)
- [c136] Jun Wu, Shigeki Sagayama:
Musical Instrument Identification Based on New Boosting Algorithm with Probabilistic Decisions. CMMR/FRSM 2011: 66-78
- [c135] Emmanuel Dupoux, Guillaume Beraud-Sudreau, Shigeki Sagayama:
Templatic features for modeling phoneme acquisition. CogSci 2011
- [c134] Jun Wu, Emmanuel Vincent, Stanislaw Andrzej Raczynski, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Multipitch estimation by joint modeling of harmonic and transient sounds. ICASSP 2011: 25-28
- [c133] Ngoc Q. K. Duong, Hideyuki Tachibana, Emmanuel Vincent, Nobutaka Ono, Rémi Gribonval, Shigeki Sagayama:
Multichannel harmonic and percussive component separation by joint modeling of spatial and spectral continuity. ICASSP 2011: 205-208
- [c132] Masahiro Nakano, Jonathan Le Roux, Hirokazu Kameoka, Nobutaka Ono, Shigeki Sagayama:
Infinite-state spectrum model for music signal analysis. ICASSP 2011: 1972-1975
- [c131] Takuho Nakano, Akisato Kimura, Hirokazu Kameoka, Shigeki Miyabe, Shigeki Sagayama, Nobutaka Ono, Kunio Kashino, Takuya Nishimoto:
Automatic video annotation via Hierarchical Topic Trajectory Model considering cross-modal correlations. ICASSP 2011: 2380-2383
- [c130] Tomoyuki Hamamura, Bunpei Irie, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Concurrent Optimization of Context Clustering and GMM for Offline Handwritten Word Recognition Using HMM. ICDAR 2011: 523-527
- [c129] Miquel Espi, Shigeki Miyabe, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Using Spectral Fluctuation of Speech in Multi-Feature HMM-Based Voice Activity Detection. INTERSPEECH 2011: 2613-2616
- [c128] Tae Hun Kim, Satoru Fukayama, Takuya Nishimoto, Shigeki Sagayama:
Polyhymnia: An Automatic Piano Performance System with Statistical Modeling of Polyphonic Expression and Musical Symbol Interpretation. NIME 2011: 96-99
- [c127] Masahiro Nakano, Jonathan Le Roux, Hirokazu Kameoka, Tomohiko Nakamura, Nobutaka Ono, Shigeki Sagayama:
Bayesian nonparametric spectrogram modeling based on infinite factorial infinite hidden Markov model. WASPAA 2011: 325-328
- 2010
- [j20] Hirokazu Kameoka, Nobutaka Ono, Shigeki Sagayama:
Speech Spectrum Modeling for Joint Estimation of Spectral Envelope and Fundamental Frequency. IEEE Trans. Speech Audio Process. 18(6): 1507-1516 (2010)
- [c126] Keisuke Hasegawa, Nobutaka Ono, Shigeki Miyabe, Shigeki Sagayama:
Blind Estimation of Locations and Time Offsets for Distributed Recording Devices. LVA/ICA 2010: 57-64
- [c125] Nobutaka Ito, Emmanuel Vincent, Nobutaka Ono, Rémi Gribonval, Shigeki Sagayama:
Crystal-MUSIC: Accurate Localization of Multiple Sources in Diffuse Noise Environments Using Crystal-Shaped Microphone Arrays. LVA/ICA 2010: 81-88
- [c124] Jonathan Le Roux, Emmanuel Vincent, Yuu Mizuno, Hirokazu Kameoka, Nobutaka Ono, Shigeki Sagayama:
Consistent Wiener Filtering: Generalized Time-Frequency Masking Respecting Spectrogram Consistency. LVA/ICA 2010: 89-96
- [c123] Masahiro Nakano, Jonathan Le Roux, Hirokazu Kameoka, Yu Kitano, Nobutaka Ono, Shigeki Sagayama:
Nonnegative Matrix Factorization with Markov-Chained Bases for Modeling Time-Varying Patterns in Music Spectrograms. LVA/ICA 2010: 149-156
- [c122] Emiru Tsunoo, Taichi Akase, Nobutaka Ono, Shigeki Sagayama:
Music mood classification by rhythm and bass-line unit pattern analysis. ICASSP 2010: 265-268
- [c121] Hideyuki Tachibana, Takuma Ono, Nobutaka Ono, Shigeki Sagayama:
Melody line estimation in homophonic music audio signals based on temporal-variability of melodic source. ICASSP 2010: 425-428
- [c120] Nobutaka Ono, Shigeki Sagayama:
R-means localization: A simple iterative algorithm for range-difference-based source localization. ICASSP 2010: 2718-2721
- [c119] Nobutaka Ito, Nobutaka Ono, Emmanuel Vincent, Shigeki Sagayama:
Designing the Wiener post-filter for diffuse noise suppression using imaginary parts of inter-channel cross-spectra. ICASSP 2010: 2818-2821
- [c118] Yu Kitano, Hirokazu Kameoka, Yosuke Izumi, Nobutaka Ono, Shigeki Sagayama:
A sparse component model of source signals and its application to blind source separation. ICASSP 2010: 4122-4125
- [c117] Yushi Ueda, Yuuki Uchiyama, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
HMM-based approach for automatic chord detection using refined acoustic features. ICASSP 2010: 5518-5521
- [c116] Jun Wu, Yu Kitano, Stanislaw Andrzej Raczynski, Shigeki Miyabe, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Musical instrument identification based on harmonic temporal timbre features. SAPA@INTERSPEECH 2010: 7-12
- [c115] Halfdan Rump, Shigeki Miyabe, Emiru Tsunoo, Nobutaka Ono, Shigeki Sagayama:
Autoregressive MFCC Models for Genre Classification Improved by Harmonic-percussion Separation. ISMIR 2010: 87-92
- [c114] Stanislaw Andrzej Raczynski, Emmanuel Vincent, Frédéric Bimbot, Shigeki Sagayama:
Multiple Pitch Transcription using DBN-based Musicological Models. ISMIR 2010: 363-368
- [c113] Kazuma Murao, Masahiro Nakano, Yu Kitano, Nobutaka Ono, Shigeki Sagayama:
Monophonic Instrument Sound Segregation by Clustering NMF Components Based on Basis Similarity and Gain Disjointness. ISMIR 2010: 375-380
- [c112] Emmanuel Vincent, Stanislaw Andrzej Raczynski, Nobutaka Ono, Shigeki Sagayama:
A Roadmap Towards Versatile MIR. ISMIR 2010: 662-664
- [c111] Jun Wu, Yu Kitano, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Flexible Harmonic Temporal Structure for Modeling Musical Instrument. ICEC 2010: 416-418
- [c110] Miquel Espi, Shigeki Miyabe, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Analysis on speech characteristics for robust voice activity detection. SLT 2010: 151-156
- [c109] Takuho Nakano, Shigeki Sagayama, Nobutaka Ono, Akisato Kimura, Hirokazu Kameoka, Kunio Kashino:
Semantic Indexing and Known Item Search Based on a Unified Model with Topic Transition Representation. TRECVID 2010
- [p2] Nobutaka Ono, Kenichi Miyamoto, Hirokazu Kameoka, Jonathan Le Roux, Yuuki Uchiyama, Emiru Tsunoo, Takuya Nishimoto, Shigeki Sagayama:
Harmonic and Percussive Sound Separation and Its Application to MIR-Related Tasks. Advances in Music Information Retrieval 2010: 213-236
2000 – 2009
- 2009
- [c108] Stanislaw Andrzej Raczynski, Nobutaka Ono, Shigeki Sagayama:
Extending Nonnegative Matrix Factorization - A discussion in the context of multiple frequency estimation of musical signals. EUSIPCO 2009: 934-938
- [c107] Emiru Tsunoo, Nobutaka Ono, Shigeki Sagayama:
Rhythm map: Extraction of unit rhythmic patterns and analysis of rhythmic structure from music acoustic signals. ICASSP 2009: 185-188
- [c106] Hirokazu Kameoka, Nobutaka Ono, Kunio Kashino, Shigeki Sagayama:
Complex NMF: A new sparse representation for acoustic signals. ICASSP 2009: 3437-3440
- [c105] Emiru Tsunoo, George Tzanetakis, Nobutaka Ono, Shigeki Sagayama:
Audio genre classification using percussive pattern clustering combined with timbral features. ICME 2009: 382-385
- [c104] Yosuke Izumi, Kenta Nishiki, Shinji Watanabe, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Stereo-input speech recognition using sparseness-based time-frequency masking in a reverberant environment. INTERSPEECH 2009: 1955-1958
- [c103] Emiru Tsunoo, Nobutaka Ono, Shigeki Sagayama:
Musical Bass-Line Pattern Clustering and Its Application to Audio Genre Classification. ISMIR 2009: 219-224
- [c102] Jeremy Reed, Yushi Ueda, Sabato Marco Siniscalchi, Yuuki Uchiyama, Shigeki Sagayama, Chin-Hui Lee:
Minimum Classification Error Training to Improve Isolated Chord Recognition. ISMIR 2009: 609-614
- [c101] Satoru Fukayama, Kei Nakatsuma, Shinji Sako, Yuichiro Yonebayashi, Tae Hun Kim, Si Wei Qin, Takuho Nakano, Takuya Nishimoto, Shigeki Sagayama:
Orpheus: Automatic Composition System Considering Prosody of Japanese Lyrics. ICEC 2009: 309-310
- [c100] Stanislaw Andrzej Raczynski, Nobutaka Ono, Shigeki Sagayama:
Note detection with dynamic bayesian networks as a postanalysis step for NMF-based multiple pitch estimation techniques. WASPAA 2009: 49-52
- [c99] Nobutaka Ono, Hitoshi Kohno, Nobutaka Ito, Shigeki Sagayama:
Blind alignment of asynchronously recorded signals for distributed microphone array. WASPAA 2009: 161-164
- 2008
- [j19] Nobutaka Ono, Souichiro Fukamachi, Shigeki Sagayama:
Sound Source Localization with Front-Back Judgement by Two Microphones Asymmetrically Mounted on a Sphere. J. Multim. 3(3): 1-9 (2008)
- [j18] Shoichiro Saito, Hirokazu Kameoka, Keigo Takahashi, Takuya Nishimoto, Shigeki Sagayama:
Specmurt Analysis of Polyphonic Music Signals. IEEE Trans. Speech Audio Process. 16(3): 639-650 (2008)
- [c98] Nobutaka Ono, Kenichi Miyamoto, Jonathan Le Roux, Hirokazu Kameoka, Shigeki Sagayama:
Separation of a monaural audio signal into harmonic/percussive components by complementary diffusion on spectrogram. EUSIPCO 2008: 1-4
- [c97] Hirokazu Kameoka, Nobutaka Ono, Shigeki Sagayama:
Auxiliary function approach to parameter estimation of constrained sinusoidal model for monaural speech separation. ICASSP 2008: 29-32
- [c96] Kenichi Miyamoto, Hirokazu Kameoka, Takuya Nishimoto, Nobutaka Ono, Shigeki Sagayama:
Harmonic-Temporal-Timbral Clustering (HTTC) for the analysis of multi-instrument polyphonic music signals. ICASSP 2008: 113-116
- [c95] Nobutaka Ito, Nobutaka Ono, Shigeki Sagayama:
A blind noise decorrelation approach with crystal arrays on designing post-filters for diffuse noise suppression. ICASSP 2008: 317-320
- [c94] Jonathan Le Roux, Hirokazu Kameoka, Nobutaka Ono, Shigeki Sagayama, Alain de Cheveigné:
Modulation analysis of speech through orthogonal FIR filterbank optimization. ICASSP 2008: 4189-4192
- [c93] Ikumi Ota, Ryo Yamamoto, Takuya Nishimoto, Shigeki Sagayama:
On-line handwritten Kanji string recognition based on grammar description of character structures. ICPR 2008: 1-5
- [c92] Jonathan Le Roux, Hirokazu Kameoka, Nobutaka Ono, Alain de Cheveigné, Shigeki Sagayama:
Computational auditory induction by missing-data non-negative matrix factorization. SAPA@INTERSPEECH 2008: 1-6
- [c91] Jonathan Le Roux, Nobutaka Ono, Shigeki Sagayama:
Explicit consistency constraints for STFT spectrograms and their application to phase reconstruction. SAPA@INTERSPEECH 2008: 23-28
- [c90] Nobutaka Ono, Kenichi Miyamoto, Hirokazu Kameoka, Shigeki Sagayama:
A Real-time Equalizer of Harmonic and Percussive Components in Music Signals. ISMIR 2008: 139-144
- 2007
- [j17] Hirokazu Kameoka, Takuya Nishimoto, Shigeki Sagayama:
A Multipitch Analyzer Based on Harmonic Temporal Structured Clustering. IEEE Trans. Speech Audio Process. 15(3): 982-994 (2007)
- [j16] Jonathan Le Roux, Hirokazu Kameoka, Nobutaka Ono, Alain de Cheveigné, Shigeki Sagayama:
Single and Multiple F0 Contour Estimation Through Parametric Spectrogram Modeling of Speech in Noisy Environments. IEEE Trans. Speech Audio Process. 15(4): 1135-1145 (2007)
- [c89] Kenichi Miyamoto, Hirokazu Kameoka, Haruto Takeda, Takuya Nishimoto, Shigeki Sagayama:
Probabilistic Approach to Automatic Music Transcription from Audio Signals. ICASSP (2) 2007: 697-700
- [c88] Jonathan Le Roux, Hirokazu Kameoka, Nobutaka Ono, Alain de Cheveigné, Shigeki Sagayama:
Harmonic-Temporal Clustering of Speech for Single and Multiple F0 Contour Estimation in Noisy Environments. ICASSP (4) 2007: 1053-1056
- [c87] Haruto Takeda, Takuya Nishimoto, Shigeki Sagayama:
Rhythm and Tempo Analysis Toward Automatic Music Transcription. ICASSP (4) 2007: 1317-1320
- [c86]