Sungnyun Kim
Verified email at kaist.ac.kr - Homepage
Title · Cited by · Year
MixCo: Mix-up Contrastive Learning for Visual Representation
S Kim*, G Lee*, S Bae*, SY Yun
NeurIPS Workshop on Self-Supervised Learning: Theory and Practice, 2020
97 · 2020
Understanding Cross-Domain Few-Shot Learning Based on Domain Similarity and Few-Shot Difficulty
J Oh*, S Kim*, N Ho*, JH Kim, H Song, SY Yun
Advances in Neural Information Processing Systems 35, 2622-2636, 2022
42 · 2022
DistiLLM: Towards Streamlined Distillation for Large Language Models
J Ko, S Kim, T Chen, SY Yun
Proceedings of the 41st International Conference on Machine Learning, 2024
41 · 2024
Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification
S Bae, JW Kim, WY Cho, H Baek, S Son, B Lee, C Ha, K Tae, S Kim*, ...
Proceedings of Interspeech, 5436-5440, 2023
32* · 2023
DiffBlender: Scalable and Composable Multimodal Text-to-Image Diffusion Models
S Kim, J Lee, K Hong, D Kim, N Ahn
arXiv preprint arXiv:2305.15194, 2023
15 · 2023
Self-Contrastive Learning: Single-Viewed Supervised Contrastive Framework Using Sub-network
S Bae*, S Kim*, J Ko, G Lee, S Noh, SY Yun
Proceedings of the AAAI Conference on Artificial Intelligence 37 (1), 197-205, 2023
13* · 2023
ReFine: Re-randomization before Fine-tuning for Cross-domain Few-shot Learning
J Oh*, S Kim*, N Ho*, JH Kim, H Song, SY Yun
Proceedings of the 31st ACM International Conference on Information …, 2022
13 · 2022
Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning
S Kim*, S Bae*, SY Yun
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2023
11 · 2023
How to Fine-tune Models with Few Samples: Update, Data Augmentation, and Test-time Augmentation
Y Kim*, J Oh*, S Kim, SY Yun
ICML Workshop on Updatable Machine Learning, 2022
6 · 2022
Recycle-and-Distill: Universal Compression Strategy for Transformer-based Speech SSL Models with Attention Map Reusing and Masking Distillation
K Jang*, S Kim*, SY Yun, H Kim
Proceedings of Interspeech, 316-320, 2023
5 · 2023
Calibration of Few-Shot Classification Tasks: Mitigating Misconfidence From Distribution Mismatch
S Kim, SY Yun
IEEE Access 10, 53894-53908, 2022
5* · 2022
Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition
S Kim*, K Jang*, S Bae, H Kim, SY Yun
IEEE Spoken Language Technology Workshop, 457-464, 2024
3 · 2024
FedDr+: Stabilizing Dot-regression with Global Feature Distillation for Federated Learning
S Kim, M Jeong, S Kim, S Cho, S Ahn, SY Yun
KDD Workshop on Federated Learning for Data Mining and Graph Analytics (FedKDD), 2024
1 · 2024
STaR: Distilling Speech Temporal Relation for Lightweight Speech Self-Supervised Learning Models
K Jang, S Kim, H Kim
IEEE International Conference on Acoustics, Speech and Signal Processing …, 2024
1 · 2024
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition
S Kim, K Jang, S Bae, S Cho, SY Yun
arXiv preprint arXiv:2502.10447, 2025
· 2025
DocKD: Knowledge Distillation from LLMs for Open-World Document Understanding Models
S Kim*, H Liao*, S Appalaraju, P Tang, Z Tu, RK Satzoda, R Manmatha, ...
Proceedings of the 2024 Conference on Empirical Methods in Natural Language …, 2024
· 2024
Diffusion-based Episodes Augmentation for Offline Multi-Agent Reinforcement Learning
J Oh, S Kim, G Kim, SH Kim, SY Yun
ICML 2024 Workshop on Structured Probabilistic Inference & Generative Modeling, 2024
· 2024
Real-time and Explainable Detection of Epidemics with Global News Data
S Kim*, J Shin*, S Eom, J Oh, SY Yun
Workshop on Healthcare AI and COVID-19, 73-90, 2022
· 2022
Articles 1–18