Jang Kangwook
Title · Cited by · Year
FitHuBERT: Going Thinner and Deeper for Knowledge Distillation of Speech Self-Supervised Learning
Y Lee*, K Jang*, J Goo, Y Jung, H Kim
ISCA Interspeech 2022, 3588-3592, 2022
Cited by 53 · 2022
Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation
K Jang*, S Kim*, SY Yun, H Kim
ISCA Interspeech 2023, 316-320, 2023
Cited by 4 · 2023
Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition
S Kim*, K Jang*, S Bae, H Kim, SY Yun
IEEE SLT Workshop 2024, 457-464, 2024
Cited by 2 · 2024
One-Class Learning with Adaptive Centroid Shift for Audio Deepfake Detection
HM Kim, K Jang, H Kim
ISCA Interspeech 2024, 4853-4857, 2024
Cited by 2 · 2024
Improving Cross-Lingual Phonetic Representation of Low-Resource Languages Through Language Similarity Analysis
M Kim, K Jang, H Kim
IEEE ICASSP 2025, arXiv:2501.06810, 2025
2025
STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models
K Jang, S Kim, H Kim
IEEE ICASSP 2024, 10721-10725, 2024
2024