Sangmin Bae
PhD Student at KAIST AI
Verified email at kaist.ac.kr - Homepage
Title
Cited by
Year
Preservation of the Global Knowledge by Not-True Distillation in Federated Learning
G Lee*, M Jeong*, Y Shin, S Bae, SY Yun
NeurIPS 2022 (arXiv preprint arXiv:2106.03097), 2021
133 · 2021
MixCo: Mix-up Contrastive Learning for Visual Representation
S Kim*, G Lee*, S Bae*, SY Yun
NeurIPS Workshop 2020 (arXiv preprint arXiv:2010.06300), 2020
97 · 2020
Fast and Robust Early-Exiting Framework for Autoregressive Language Models with Synchronized Parallel Decoding
S Bae*, J Ko*, H Song, SY Yun
EMNLP 2023 (arXiv preprint arXiv:2310.05424), 2023
46 · 2023
Patch-Mix Contrastive Learning with Audio Spectrogram Transformer on Respiratory Sound Classification
S Bae*, JW Kim*, WY Cho, H Baek, S Son, B Lee, C Ha, K Tae, S Kim, ...
INTERSPEECH 2023 (arXiv preprint arXiv:2305.14032), 2023
32* · 2023
Re-thinking Federated Active Learning based on Inter-class Diversity
SM Kim*, S Bae*, H Song, SY Yun
CVPR 2023 (arXiv preprint arXiv:2303.12317), 2023
20 · 2023
Stethoscope-guided Supervised Contrastive Learning for Cross-domain Adaptation on Respiratory Sound Classification
JW Kim, S Bae, WY Cho, B Lee, HY Jung
ICASSP 2024 (arXiv preprint arXiv:2312.09603), 2023
13 · 2023
Self-Contrastive Learning: Single-viewed Supervised Contrastive Framework using Sub-network
S Bae*, S Kim*, J Ko, G Lee, S Noh, SY Yun
AAAI 2023 (arXiv preprint arXiv:2106.15499), 2021
13* · 2021
Accurate and fast federated learning via combinatorial multi-armed bandits
T Kim*, S Bae*, J Lee, S Yun
arXiv preprint arXiv:2012.03270, 2020
13 · 2020
Coreset Sampling from Open-Set for Fine-Grained Self-Supervised Learning
S Kim*, S Bae*, SY Yun
CVPR 2023 (arXiv preprint arXiv:2303.11101), 2023
11 · 2023
Block Transformer: Global-to-Local Language Modeling for Fast Inference
N Ho*, S Bae*, T Kim, H Jo, Y Kim, T Schuster, A Fisch, J Thorne, SY Yun
NeurIPS 2024 (arXiv preprint arXiv:2406.02657), 2024
9 · 2024
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Y Kim, J Yoon, S Ye, S Bae, N Ho, SJ Hwang, S Yun
NAACL 2024 (arXiv preprint arXiv:2311.08106), 2023
9 · 2023
Adversarial Fine-tuning using Generated Respiratory Sound to Address Class Imbalance
JW Kim, C Yoon, M Toikkanen, S Bae, HY Jung
NeurIPS Workshop 2023 (arXiv preprint arXiv:2311.06480), 2023
6 · 2023
VACoDe: Visual Augmented Contrastive Decoding
S Kim, B Cho, S Bae, S Ahn, SY Yun
ICML Workshop 2024 (arXiv preprint arXiv:2408.05337), 2024
5 · 2024
Why In-Context Learning Transformers are Tabular Data Classifiers
F Breejen, S Bae, S Cha, SY Yun
NeurIPS Workshop 2023 (arXiv preprint arXiv:2405.13396), 2024
5* · 2024
RepAugment: Input-Agnostic Representation-Level Augmentation for Respiratory Sound Classification
JW Kim, M Toikkanen, S Bae, M Kim, HY Jung
EMBC 2024 (arXiv preprint arXiv:2405.02996), 2024
5 · 2024
Relaxed Recursive Transformers: Effective Parameter Sharing with Layer-wise LoRA
S Bae, A Fisch, H Harutyunyan, Z Ji, S Kim, T Schuster
ICLR 2025 (arXiv preprint arXiv:2410.20672), 2024
4 · 2024
Learning Video Temporal Dynamics with Cross-Modal Attention for Robust Audio-Visual Speech Recognition
S Kim*, K Jang*, S Bae, H Kim, SY Yun
SLT 2024 (arXiv preprint arXiv:2407.03563), 2024
3 · 2024
Hard Prompts Made Interpretable: Sparse Entropy Regularization for Prompt Tuning with RL
Y Choi, S Bae, S Ban, M Jeong, C Zhang, L Song, L Zhao, J Bian, KE Kim
ACL 2024 (arXiv preprint arXiv:2407.14733), 2024
2 · 2024
SIPA: A simple framework for efficient networks
G Lee*, S Bae*, J Oh, SY Yun
ICDM Workshop 2020 (arXiv preprint arXiv:2004.14476), 2020
1 · 2020
MoHAVE: Mixture of Hierarchical Audio-Visual Experts for Robust Speech Recognition
S Kim, K Jang, S Bae, S Cho, SY Yun
arXiv preprint arXiv:2502.10447, 2025
2025
Articles 1–20