Masato Mimura
NTT Corporation
Verified email at sap.ist.i.kyoto-u.ac.jp
Title
Cited by
Year
Statistical speech enhancement based on probabilistic integration of variational autoencoder and non-negative matrix factorization
Y Bando, M Mimura, K Itoyama, K Yoshii, T Kawahara
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
155 · 2018
Acoustic-to-word attention-based model complemented with character-level CTC-based model
S Ueno, H Inaguma, M Mimura, T Kawahara
2018 IEEE International Conference on Acoustics, Speech and Signal …, 2018
78 · 2018
Leveraging sequence-to-sequence speech synthesis for enhancing acoustic-to-word speech recognition
M Mimura, S Ueno, H Inaguma, S Sakai, T Kawahara
2018 IEEE Spoken Language Technology Workshop (SLT), 477-484, 2018
74 · 2018
Unsupervised speech enhancement based on multichannel NMF-informed beamforming for noise-robust automatic speech recognition
K Shimada, Y Bando, M Mimura, K Itoyama, K Yoshii, T Kawahara
IEEE/ACM Transactions on Audio, Speech, and Language Processing 27 (5), 960-971, 2019
66 · 2019
Distilling the knowledge of BERT for sequence-to-sequence ASR
H Futami, H Inaguma, S Ueno, M Mimura, S Sakai, T Kawahara
arXiv preprint arXiv:2008.03822, 2020
63 · 2020
Uyghur morpheme-based language models and ASR
M Ablimit, G Neubig, M Mimura, S Mori, T Kawahara, A Hamdulla
IEEE 10th International Conference on Signal Processing Proceedings, 581-584, 2010
48 · 2010
Bayesian learning of a language model from continuous speech
G Neubig, M Mimura, S Mori, T Kawahara
IEICE TRANSACTIONS on Information and Systems 95 (2), 614-625, 2012
47 · 2012
Learning a language model from continuous speech
G Neubig, M Mimura, S Mori, T Kawahara
Eleventh Annual Conference of the International Speech Communication Association, 2010
46 · 2010
Multi-speaker sequence-to-sequence speech synthesis for data augmentation in acoustic-to-word speech recognition
S Ueno, M Mimura, S Sakai, T Kawahara
ICASSP 2019-2019 IEEE International Conference on Acoustics, Speech and …, 2019
43 · 2019
Enhancing monotonic multihead attention for streaming ASR
H Inaguma, M Mimura, T Kawahara
arXiv preprint arXiv:2005.09394, 2020
41 · 2020
Continuous speech recognition consortium: an open repository for CSR tools and models
A Lee, T Kawahara, K Takeda, M Mimura, A Yamada, A Ito, K Itou, ...
35 · 2002
Cross-domain speech recognition using nonparallel corpora with cycle-consistent adversarial networks
M Mimura, S Sakai, T Kawahara
2017 IEEE automatic speech recognition and understanding workshop (ASRU …, 2017
34 · 2017
Joint optimization of denoising autoencoder and DNN acoustic model based on multi-target learning for noisy speech recognition
M Mimura, S Sakai, T Kawahara
Proc. Interspeech 2016, 3803-3807, 2016
30 · 2016
Speech dereverberation using long short-term memory
M Mimura, S Sakai, T Kawahara
Sixteenth Annual Conference of the International Speech Communication …, 2015
29 · 2015
Automatic transcription system for meetings of the Japanese national congress
Y Akita, M Mimura, T Kawahara
Proc. InterSpeech 2009, 84-87, 2009
28 · 2009
Data augmentation for ASR using TTS via a discrete representation
S Ueno, M Mimura, S Sakai, T Kawahara
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU), 68-75, 2021
27 · 2021
Reverberant speech recognition combining deep neural networks and deep autoencoders augmented with a phone-class feature
M Mimura, S Sakai, T Kawahara
EURASIP journal on Advances in Signal Processing 2015, 1-13, 2015
25 · 2015
ASR rescoring and confidence estimation with ELECTRA
H Futami, H Inaguma, M Mimura, S Sakai, T Kawahara
2021 IEEE Automatic Speech Recognition and Understanding Workshop (ASRU …, 2021
24 · 2021
Semi-supervised ensemble DNN acoustic model training
S Li, X Lu, S Sakai, M Mimura, T Kawahara
2017 IEEE International Conference on Acoustics, Speech and Signal …, 2017
22 · 2017
Reverberant speech recognition combining deep neural networks and deep autoencoders
M Mimura, S Sakai, T Kawahara
IEEE REVERB Workshop, 2014
22 · 2014
Articles 1–20