| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning | Y Lee*, K Jang*, J Goo, Y Jung, H Kim | ISCA Interspeech 2022, 3588-3592 | 53 | 2022 |
| Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation | K Jang*, S Kim*, SY Yun, H Kim | ISCA Interspeech 2023, 316-320 | 4 | 2023 |
| Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition | S Kim*, K Jang*, S Bae, H Kim, SY Yun | IEEE SLT Workshop 2024, 457-464 | 2 | 2024 |
| One-Class Learning with Adaptive Centroid Shift for Audio Deepfake Detection | HM Kim, K Jang, H Kim | ISCA Interspeech 2024, 4853-4857 | 2 | 2024 |
| Improving Cross-Lingual Phonetic Representation of Low-Resource Languages Through Language Similarity Analysis | M Kim, K Jang, H Kim | IEEE ICASSP 2025, arXiv: 2501.06810 | | 2025 |
| STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models | K Jang, S Kim, H Kim | IEEE ICASSP 2024, 10721-10725 | | 2024 |