Masanari Ohi
Title · Cited by · Year
Continual Pre-Training for Cross-Lingual LLM Adaptation: Enhancing Japanese Language Capabilities
K Fujii, T Nakamura, M Loem, H Iida, M Ohi, K Hattori, H Shota, S Mizuki, ...
arXiv preprint arXiv:2404.17790, 2024
Cited by 31 · 2024
Building a Large Japanese Web Corpus for Large Language Models
N Okazaki, K Hattori, H Shota, H Iida, M Ohi, K Fujii, T Nakamura, M Loem, ...
arXiv preprint arXiv:2404.17733, 2024
Cited by 6 · 2024
Likelihood-based Mitigation of Evaluation Bias in Large Language Models
M Ohi, M Kaneko, R Koike, M Loem, N Okazaki
arXiv preprint arXiv:2402.15987, 2024
Cited by 5 · 2024
HALL-E: hierarchical neural codec language model for minute-long zero-shot text-to-speech synthesis
Y Nishimura, T Hirose, M Ohi, H Nakayama, N Inoue
arXiv preprint arXiv:2410.04380, 2024
Cited by 2 · 2024
ELP-Adapters: Parameter Efficient Adapter Tuning for Various Speech Processing Tasks
N Inoue, S Otake, T Hirose, M Ohi, R Kawakami
IEEE/ACM Transactions on Audio, Speech, and Language Processing, 2024
Cited by 1 · 2024
HarmonicEval: Multi-modal, Multi-task, Multi-criteria Automatic Evaluation Using a Vision Language Model
M Ohi, M Kaneko, N Okazaki, N Inoue
arXiv preprint arXiv:2412.14613, 2024
Year 2024
Why We Build Local Large Language Models: An Observational Analysis from 35 Japanese and Multilingual LLMs
K Saito, S Mizuki, M Ohi, T Nakamura, T Shiotani, K Maeda, Y Ma, ...
arXiv preprint arXiv:2412.14471, 2024
Year 2024