Yuma Koizumi
2020 – today
- 2024
- [i35] Min Ma, Yuma Koizumi, Shigeki Karita, Heiga Zen, Jason Riesa, Haruko Ishikawa, Michiel Bacchiani: FLEURS-R: A Restored Multilingual Speech Corpus for Generation Tasks. CoRR abs/2408.06227 (2024)
- 2023
- [c49] Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Michiel Bacchiani, Yu Zhang, Wei Han, Ankur Bapna: LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus. INTERSPEECH 2023: 5496-5500
- [c48] Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Yu Zhang, Wei Han, Ankur Bapna, Michiel Bacchiani: Miipher: A Robust Speech Restoration Model Integrating Self-Supervised Speech and Text Representations. WASPAA 2023: 1-5
- [i34] Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Yu Zhang, Wei Han, Ankur Bapna, Michiel Bacchiani: Miipher: A Robust Speech Restoration Model Integrating Self-Supervised Speech and Text Representations. CoRR abs/2303.01664 (2023)
- [i33] Kota Dohi, Keisuke Imoto, Noboru Harada, Daisuke Niizumi, Yuma Koizumi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Yohei Kawaguchi: Description and Discussion on DCASE 2023 Challenge Task 2: First-Shot Unsupervised Anomalous Sound Detection for Machine Condition Monitoring. CoRR abs/2305.07828 (2023)
- [i32] Yuma Koizumi, Heiga Zen, Shigeki Karita, Yifan Ding, Kohei Yatabe, Nobuyuki Morioka, Michiel Bacchiani, Yu Zhang, Wei Han, Ankur Bapna: LibriTTS-R: A Restored Multi-Speaker Text-to-Speech Corpus. CoRR abs/2305.18802 (2023)
- 2022
- [c47] Kota Dohi, Keisuke Imoto, Noboru Harada, Daisuke Niizumi, Yuma Koizumi, Tomoya Nishida, Harsh Purohit, Ryo Tanabe, Takashi Endo, Masaaki Yamamoto, Yohei Kawaguchi: Description and Discussion on DCASE 2022 Challenge Task 2: Unsupervised Anomalous Sound Detection for Machine Condition Monitoring Applying Domain Generalization Techniques. DCASE 2022
- [c46] Yuma Koizumi, Heiga Zen, Kohei Yatabe, Nanxin Chen, Michiel Bacchiani: SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping. INTERSPEECH 2022: 803-807
- [c45] Yuma Koizumi, Shigeki Karita, Arun Narayanan, Sankaran Panchapagesan, Michiel Bacchiani: SNRi Target Training for Joint Speech Enhancement and Recognition. INTERSPEECH 2022: 1173-1177
- [c44] Arun Narayanan, James Walker, Sankaran Panchapagesan, Nathan Howard, Yuma Koizumi: Learning Mask Scalars for Improved Robust Automatic Speech Recognition. SLT 2022: 317-323
- [c43] Yuma Koizumi, Kohei Yatabe, Heiga Zen, Michiel Bacchiani: Wavefit: an Iterative and Non-Autoregressive Neural Vocoder Based on Fixed-Point Iteration. SLT 2022: 884-891
- [i31] Yuma Koizumi, Heiga Zen, Kohei Yatabe, Nanxin Chen, Michiel Bacchiani: SpecGrad: Diffusion Probabilistic Model based Neural Vocoder with Adaptive Noise Spectral Shaping. CoRR abs/2203.16749 (2022)
- [i30] Arun Narayanan, James Walker, Sankaran Panchapagesan, Nathan Howard, Yuma Koizumi: Mask scalar prediction for improving robust automatic speech recognition. CoRR abs/2204.12092 (2022)
- [i29] Kota Dohi, Keisuke Imoto, Noboru Harada, Daisuke Niizumi, Yuma Koizumi, Tomoya Nishida, Harsh Purohit, Takashi Endo, Masaaki Yamamoto, Yohei Kawaguchi: Description and Discussion on DCASE 2022 Challenge Task 2: Unsupervised Anomalous Sound Detection for Machine Condition Monitoring Applying Domain Generalization Techniques. CoRR abs/2206.05876 (2022)
- [i28] Yuma Koizumi, Kohei Yatabe, Heiga Zen, Michiel Bacchiani: WaveFit: An Iterative and Non-autoregressive Neural Vocoder based on Fixed-Point Iteration. CoRR abs/2210.01029 (2022)
- 2021
- [j4] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Deep Griffin-Lim Iteration: Trainable Iterative Phase Reconstruction Using Neural Network. IEEE J. Sel. Top. Signal Process. 15(1): 37-50 (2021)
- [c42] Yohei Kawaguchi, Keisuke Imoto, Yuma Koizumi, Noboru Harada, Daisuke Niizumi, Kota Dohi, Ryo Tanabe, Harsh Purohit, Takashi Endo: Description and Discussion on DCASE 2021 Challenge Task 2: Unsupervised Anomalous Detection for Machine Condition Monitoring Under Domain Shifted Conditions. DCASE 2021: 186-190
- [c41] Koichi Saito, Tomohiko Nakamura, Kohei Yatabe, Yuma Koizumi, Hiroshi Saruwatari: Sampling-Frequency-Independent Audio Source Separation Using Convolution Layer Based on Impulse Invariant Method. EUSIPCO 2021: 321-325
- [c40] Takuya Fujimura, Yuma Koizumi, Kohei Yatabe, Ryoichi Miyazaki: Noisy-target Training: A Training Strategy for DNN-based Speech Enhancement without Clean Speech. EUSIPCO 2021: 436-440
- [c39] Yuma Koizumi, Shigeki Karita, Scott Wisdom, Hakan Erdogan, John R. Hershey, Llion Jones, Michiel Bacchiani: DF-Conformer: Integrated Architecture of Conv-Tasnet and Conformer Using Linear Complexity Self-Attention for Speech Enhancement. WASPAA 2021: 161-165
- [i27] Koichi Saito, Tomohiko Nakamura, Kohei Yatabe, Yuma Koizumi, Hiroshi Saruwatari: Sampling-Frequency-Independent Audio Source Separation Using Convolution Layer Based on Impulse Invariant Method. CoRR abs/2105.04079 (2021)
- [i26] Yohei Kawaguchi, Keisuke Imoto, Yuma Koizumi, Noboru Harada, Daisuke Niizumi, Kota Dohi, Ryo Tanabe, Harsh Purohit, Takashi Endo: Description and Discussion on DCASE 2021 Challenge Task 2: Unsupervised Anomalous Sound Detection for Machine Condition Monitoring under Domain Shifted Conditions. CoRR abs/2106.04492 (2021)
- [i25] Yuma Koizumi, Shigeki Karita, Scott Wisdom, Hakan Erdogan, John R. Hershey, Llion Jones, Michiel Bacchiani: DF-Conformer: Integrated architecture of Conv-TasNet and Conformer using linear complexity self-attention for speech enhancement. CoRR abs/2106.15813 (2021)
- [i24] Yuma Koizumi, Shigeki Karita, Arun Narayanan, Sankaran Panchapagesan, Michiel Bacchiani: SNRi Target Training for Joint Speech Enhancement and Recognition. CoRR abs/2111.00764 (2021)
- 2020
- [c38] Yuma Koizumi, Yohei Kawaguchi, Keisuke Imoto, Toshiki Nakamura, Yuki Nikaido, Ryo Tanabe, Harsh Purohit, Kaori Suefusa, Takashi Endo, Masahiro Yasuda, Noboru Harada: Description and Discussion on DCASE2020 Challenge Task2: Unsupervised Anomalous Sound Detection for Machine Condition Monitoring. DCASE 2020: 81-85
- [c37] Daiki Takeuchi, Yuma Koizumi, Yasunori Ohishi, Noboru Harada, Kunio Kashino: Effects of Word-Frequency Based Pre- and Post- Processings for Audio Captioning. DCASE 2020: 190-194
- [c36] Yuma Koizumi, Kohei Yatabe, Marc Delcroix, Yoshiki Masuyama, Daiki Takeuchi: Speech Enhancement Using Self-Adaptation and Multi-Head Self-Attention. ICASSP 2020: 181-185
- [c35] Yuma Koizumi, Masahiro Yasuda, Shin Murata, Shoichiro Saito, Hisashi Uematsu, Noboru Harada: SPIDERnet: Attention Network For One-Shot Anomaly Detection In Sounds. ICASSP 2020: 281-285
- [c34] Keisuke Imoto, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi, Yoichi Yamashita: Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels. ICASSP 2020: 621-625
- [c33] Masahiro Yasuda, Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Keisuke Imoto: Sound Event Localization Based on Sound Intensity Vector Refined by Dnn-Based Denoising and Source Separation. ICASSP 2020: 651-655
- [c32] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Phase Reconstruction Based On Recurrent Phase Unwrapping With Deep Neural Networks. ICASSP 2020: 826-830
- [c31] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Real-Time Speech Enhancement Using Equilibriated RNN. ICASSP 2020: 851-855
- [c30] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Invertible DNN-Based Nonlinear Time-Frequency Transform for Speech Enhancement. ICASSP 2020: 6644-6648
- [c29] Masaki Kawanaka, Yuma Koizumi, Ryoichi Miyazaki, Kohei Yatabe: Stable Training of Dnn for Speech Enhancement Based on Perceptually-Motivated Black-Box Cost Function. ICASSP 2020: 7524-7528
- [c28] Tsubasa Ochiai, Marc Delcroix, Yuma Koizumi, Hiroaki Ito, Keisuke Kinoshita, Shoko Araki: Listen to What You Want: Neural Network-Based Universal Sound Selector. INTERSPEECH 2020: 1441-1445
- [c27] Masahiro Yasuda, Yasunori Ohishi, Yuma Koizumi, Noboru Harada: Crossmodal Sound Retrieval Based on Specific Target Co-Occurrence Denoted with Weak Labels. INTERSPEECH 2020: 1446-1450
- [c26] Yuma Koizumi, Ryo Masumura, Kyosuke Nishida, Masahiro Yasuda, Shoichiro Saito: A Transformer-Based Audio Captioning Model with Keyword Estimation. INTERSPEECH 2020: 1977-1981
- [e1] Nobutaka Ono, Noboru Harada, Yohei Kawaguchi, Annamaria Mesaros, Keisuke Imoto, Yuma Koizumi, Tatsuya Komatsu: Proceedings of the 5th Workshop on Detection and Classification of Acoustic Scenes and Events 2020 (DCASE 2020), Tokyo, Japan (full virtual), November 2-4, 2020. 2020, ISBN 978-4-600-00566-5
- [i23] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Phase reconstruction based on recurrent phase unwrapping with deep neural networks. CoRR abs/2002.05832 (2020)
- [i22] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Real-time speech enhancement using equilibriated RNN. CoRR abs/2002.05843 (2020)
- [i21] Keisuke Imoto, Noriyuki Tonami, Yuma Koizumi, Masahiro Yasuda, Ryosuke Yamanishi, Yoichi Yamashita: Sound Event Detection by Multitask Learning of Sound Events and Scenes with Soft Scene Labels. CoRR abs/2002.05848 (2020)
- [i20] Yuma Koizumi, Kohei Yatabe, Marc Delcroix, Yoshiki Masuyama, Daiki Takeuchi: Speech Enhancement using Self-Adaptation and Multi-Head Self-Attention. CoRR abs/2002.05873 (2020)
- [i19] Masaki Kawanaka, Yuma Koizumi, Ryoichi Miyazaki, Kohei Yatabe: Stable Training of DNN for Speech Enhancement based on Perceptually-Motivated Black-Box Cost Function. CoRR abs/2002.05879 (2020)
- [i18] Masahiro Yasuda, Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Keisuke Imoto: Sound Event Localization based on Sound Intensity Vector Refined By DNN-Based Denoising and Source Separation. CoRR abs/2002.05994 (2020)
- [i17] Tsubasa Ochiai, Marc Delcroix, Yuma Koizumi, Hiroaki Ito, Keisuke Kinoshita, Shoko Araki: Listen to What You Want: Neural Network-based Universal Sound Selector. CoRR abs/2006.05712 (2020)
- [i16] Yuma Koizumi, Yohei Kawaguchi, Keisuke Imoto, Toshiki Nakamura, Yuki Nikaido, Ryo Tanabe, Harsh Purohit, Kaori Suefusa, Takashi Endo, Masahiro Yasuda, Noboru Harada: Description and Discussion on DCASE2020 Challenge Task2: Unsupervised Anomalous Sound Detection for Machine Condition Monitoring. CoRR abs/2006.05822 (2020)
- [i15] Yuma Koizumi, Ryo Masumura, Kyosuke Nishida, Masahiro Yasuda, Shoichiro Saito: A Transformer-based Audio Captioning Model with Keyword Estimation. CoRR abs/2007.00222 (2020)
- [i14] Yuma Koizumi, Daiki Takeuchi, Yasunori Ohishi, Noboru Harada, Kunio Kashino: The NTT DCASE2020 Challenge Task 6 system: Automated Audio Captioning with Keywords and Sentence Length Estimation. CoRR abs/2007.00225 (2020)
- [i13] Daiki Takeuchi, Yuma Koizumi, Yasunori Ohishi, Noboru Harada, Kunio Kashino: Effects of Word-frequency based Pre- and Post- Processings for Audio Captioning. CoRR abs/2009.11436 (2020)
- [i12] Yuma Koizumi, Yasunori Ohishi, Daisuke Niizumi, Daiki Takeuchi, Masahiro Yasuda: Audio Captioning using Pre-Trained Large-Scale Language Model Guided by Audio-based Similar Caption Retrieval. CoRR abs/2012.07331 (2020)
2010 – 2019
- 2019
- [j3] Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Yuta Kawachi, Noboru Harada: Unsupervised Detection of Anomalous Sound Based on Deep Learning and the Neyman-Pearson Lemma. IEEE ACM Trans. Audio Speech Lang. Process. 27(1): 212-224 (2019)
- [c25] Luca Mazzon, Yuma Koizumi, Masahiro Yasuda, Noboru Harada: First Order Ambisonics Domain Spatial Augmentation for DNN-based Direction of Arrival Estimation. DCASE 2019: 154-158
- [c24] Ryo Masumura, Kiyoaki Matsui, Yuma Koizumi, Takaaki Fukutomi, Takanobu Oba, Yushi Aono: Context-Aware Neural Voice Activity Detection Using Auxiliary Networks for Phoneme Recognition, Speech Enhancement and Acoustic Scene Classification. EUSIPCO 2019: 1-5
- [c23] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Deep Griffin-Lim Iteration. ICASSP 2019: 61-65
- [c22] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Data-driven Design of Perfect Reconstruction Filterbank for DNN-based Sound Source Enhancement. ICASSP 2019: 596-600
- [c21] Yuma Koizumi, Noboru Harada, Yoichi Haneda: Trainable Adaptive Window Switching for Speech Enhancement. ICASSP 2019: 616-620
- [c20] Yuma Koizumi, Shin Murata, Noboru Harada, Shoichiro Saito, Hisashi Uematsu: SNIPER: Few-shot Learning for Anomaly Detection to Minimize False-negative Rate with Ensured True-positive Rate. ICASSP 2019: 915-919
- [c19] Yuta Kawachi, Yuma Koizumi, Shin Murata, Noboru Harada: A Two-class Hyper-spherical Autoencoder for Supervised Anomaly Detection. ICASSP 2019: 3047-3051
- [c18] Masataka Yamaguchi, Yuma Koizumi, Noboru Harada: AdaFlow: Domain-adaptive Density Estimator with Application to Anomaly Detection and Unpaired Cross-domain Translation. ICASSP 2019: 3647-3651
- [c17] Shin Murata, Yuma Koizumi, Noboru Harada: Finding Low-Dimensional Dynamical Structure Through Variational Auto-Encoding Dynamic Mode Decomposition. MLSP 2019: 1-6
- [c16] Yuma Koizumi, Shoichiro Saito, Masataka Yamaguchi, Shin Murata, Noboru Harada: Batch Uniformization for Minimizing Maximum Anomaly Score of Dnn-Based Anomaly Detection in Sounds. WASPAA 2019: 6-10
- [c15] Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Noboru Harada, Keisuke Imoto: ToyADMOS: A Dataset of Miniature-Machine Operating Sounds for Anomalous Sound Detection. WASPAA 2019: 313-317
- [i11] Yoshiki Masuyama, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Deep Griffin-Lim Iteration. CoRR abs/1903.03971 (2019)
- [i10] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Data-driven design of perfect reconstruction filterbank for DNN-based sound source enhancement. CoRR abs/1903.08876 (2019)
- [i9] Yuma Koizumi, Shoichiro Saito, Masataka Yamaguchi, Shin Murata, Noboru Harada: Batch Uniformization for Minimizing Maximum Anomaly Score of DNN-based Anomaly Detection in Sounds. CoRR abs/1907.08338 (2019)
- [i8] Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Noboru Harada, Keisuke Imoto: ToyADMOS: A Dataset of Miniature-Machine Operating Sounds for Anomalous Sound Detection. CoRR abs/1908.03299 (2019)
- [i7] Luca Mazzon, Yuma Koizumi, Masahiro Yasuda, Noboru Harada: First Order Ambisonics Domain Spatial Augmentation for DNN-based Direction of Arrival Estimation. CoRR abs/1910.04388 (2019)
- [i6] Masahiro Yasuda, Yuma Koizumi, Luca Mazzon, Shoichiro Saito, Hisashi Uematsu: DOA Estimation by DNN-based Denoising and Dereverberation from Sound Intensity Vector. CoRR abs/1910.04415 (2019)
- [i5] Daiki Takeuchi, Kohei Yatabe, Yuma Koizumi, Yasuhiro Oikawa, Noboru Harada: Invertible DNN-based nonlinear time-frequency transform for speech enhancement. CoRR abs/1911.10764 (2019)
- 2018
- [j2] Yuma Koizumi, Kenta Niwa, Yusuke Hioka, Kazunori Kobayashi, Yoichi Haneda: DNN-Based Source Enhancement to Increase Objective Sound Quality Assessment Score. IEEE ACM Trans. Audio Speech Lang. Process. 26(10): 1780-1792 (2018)
- [c14] Yuma Koizumi, Shoichiro Saito, Suehiro Shimauchi, Kazunori Kobayashi, Noboru Harada: Distant Noise Reduction Based on Multi-delay Noise Model Using Distributed Microphone Array. EUSIPCO 2018: 1-5
- [c13] Yuma Koizumi, Noboru Harada, Yoichi Haneda, Yusuke Hioka, Kazunori Kobayashi: End-to-End Sound Source Enhancement Using Deep Neural Network in the Modified Discrete Cosine Transform Domain. ICASSP 2018: 706-710
- [c12] Yuta Kawachi, Yuma Koizumi, Noboru Harada: Complementary Set Variational Autoencoder for Supervised Anomaly Detection. ICASSP 2018: 2366-2370
- [c11] Sota Nishiguchi, Yuma Koizumi, Noboru Harada, Katunobu Itou: DNN-Based Near- and Far-Field Source Separation Using Spherical-Harmonic-Analysis-Based Acoustic Features. IWAENC 2018: 510-514
- [i4] Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Yuta Kawachi, Noboru Harada: Unsupervised Detection of Anomalous Sound based on Deep Learning and the Neyman-Pearson Lemma. CoRR abs/1810.09133 (2018)
- [i3] Yuma Koizumi, Kenta Niwa, Yusuke Hioka, Kazunori Kobayashi, Yoichi Haneda: DNN-based Source Enhancement to Increase Objective Sound Quality Assessment Score. CoRR abs/1810.09137 (2018)
- [i2] Yuma Koizumi, Noboru Harada, Yoichi Haneda: Trainable Adaptive Window Switching for Speech Enhancement. CoRR abs/1811.02438 (2018)
- [i1] Masataka Yamaguchi, Yuma Koizumi, Noboru Harada: AdaFlow: Domain-Adaptive Density Estimator with Application to Anomaly Detection and Unpaired Cross-Domain Translation. CoRR abs/1812.05796 (2018)
- 2017
- [j1] Yuma Koizumi, Kenta Niwa, Yusuke Hioka, Kazunori Kobayashi, Hitoshi Ohmuro: Informative Acoustic Feature Selection to Maximize Mutual Information for Collecting Target Sources. IEEE ACM Trans. Audio Speech Lang. Process. 25(4): 768-779 (2017)
- [c10] Yuma Koizumi, Shoichiro Saito, Hisashi Uematsu, Noboru Harada: Optimizing acoustic feature extractor for anomalous sound detection based on Neyman-Pearson lemma. EUSIPCO 2017: 698-702
- [c9] Yuma Koizumi, Kenta Niwa, Yusuke Hioka, Kazunori Kobayashi, Yoichi Haneda: DNN-based source enhancement self-optimized by reinforcement learning using sound quality measurements. ICASSP 2017: 81-85
- [c8] Kenta Niwa, Yuma Koizumi, Tomoko Kawase, Kazunori Kobayashi, Yusuke Hioka: Supervised source enhancement composed of nonnegative auto-encoders and complementarity subtraction. ICASSP 2017: 266-270
- [c7] Suehiro Shimauchi, Shinya Kudo, Yuma Koizumi, Ken'ichi Furuya: On relationships between amplitude and phase of short-time Fourier transform. ICASSP 2017: 676-680
- 2016
- [c6] Yuma Koizumi, Kenta Niwa, Yusuke Hioka, Kazunori Kobayashi, Hitoshi Ohmuro: Integrated approach of feature extraction and sound source enhancement based on maximization of mutual information. ICASSP 2016: 186-190
- [c5] Kenta Niwa, Yuma Koizumi, Tomoko Kawase, Kazunori Kobayashi, Yusuke Hioka: Pinpoint extraction of distant sound source based on DNN mapping from multiple beamforming outputs to prior SNR. ICASSP 2016: 435-439
- [c4] Kenta Niwa, Yuma Koizumi, Kazunori Kobayashi, Hisashi Uematsu: Binaural sound generation corresponding to omnidirectional video view using angular region-wise source enhancement. ICASSP 2016: 2852-2856
- 2015
- [c3] Yuma Koizumi, Yuki Ijichi, Hisaya Tanaka, Ayumi Otera, Kayoko Takahashi, Michinari Fukuda, Noriyoshi Asai: Effective approach to character input for novice BCI users. APSITT 2015: 1-3
- [c2] Yuma Koizumi, Kenta Niwa, Yusuke Hioka, Kazunori Kobayashi, Hitoshi Ohmuro: Informative acoustic feature selection on microphone array wiener filtering for collecting target source on sports ground. WASPAA 2015: 1-5
- 2014
- [c1] Yuma Koizumi, Katunobu Itou: Intra-note segmentation via sticky HMM with DP emission. ICASSP 2014: 2144-2148