Junho Kim
Verified email at korea.ac.kr
Title
Cited by
Year
Client-customized adaptation for parameter-efficient federated learning
Y Kim, J Kim, WL Mok, JH Park, SK Lee
Findings of the Association for Computational Linguistics: ACL 2023, 1159-1172, 2023
19 · 2023
Dynamic Structure Pruning for Compressing CNNs
JH Park, Y Kim, J Kim, JY Choi, SK Lee
Proceedings of the AAAI Conference on Artificial Intelligence, 2023
16 · 2023
Efficient Pre-training of Masked Language Model via Concept-based Curriculum Masking
M Lee, JH Park, J Kim, KM Kim, SK Lee
Proceedings of the 2022 Conference on Empirical Methods in Natural …, 2022
13 · 2022
SMoP: Towards efficient and effective prompt tuning with sparse mixture-of-prompts
JY Choi, J Kim, JH Park, WL Mok, SK Lee
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
11 · 2023
Learning from missing relations: Contrastive learning with commonsense knowledge graphs for commonsense inference
YH Jung, JH Park, JY Choi, M Lee, J Kim, KM Kim, SK Lee
Findings of the Association for Computational Linguistics: ACL 2022, 1514-1523, 2022
6 · 2022
Tutoring Helps Students Learn Better: Improving Knowledge Distillation for BERT with Tutor Network
J Kim, JH Park, M Lee, WL Mok, JY Choi, SK Lee
Proceedings of the 2022 Conference on Empirical Methods in Natural …, 2022
5 · 2022
MELT: Materials-aware continued pre-training for language model adaptation to materials science
J Kim, Y Kim, JH Park, Y Oh, S Kim, SK Lee
arXiv preprint arXiv:2410.15126, 2024
2 · 2024
Towards robust and generalized parameter-efficient fine-tuning for noisy label learning
Y Kim, J Kim, SK Lee
Proceedings of the 62nd Annual Meeting of the Association for Computational …, 2024
2 · 2024
Coconut: Contextualized commonsense unified transformers for graph-based commonsense augmentation of language models
JH Park, M Lee, J Kim, SK Lee
Findings of the Association for Computational Linguistics ACL 2024, 5815-5830, 2024
1 · 2024
Continual debiasing: A bias mitigation framework for natural language understanding systems
M Lee, J Kim, JH Park, SK Lee
Expert Systems with Applications, 126593, 2025
2025
C2A: Client-Customized Adaptation for Parameter-Efficient Federated Learning
Y Kim, J Kim, WL Mok, JH Park, SK Lee
arXiv preprint arXiv:2411.00311, 2024
2024
CleaR: Towards Robust and Generalized Parameter-Efficient Fine-Tuning for Noisy Label Learning
Y Kim, J Kim, SK Lee
arXiv preprint arXiv:2411.00873, 2024
2024
Mentor-KD: Making Small Language Models Better Multi-step Reasoners
H Lee, J Kim, SK Lee
arXiv preprint arXiv:2410.09037, 2024
2024
Leap-of-Thought: Accelerating Transformers via Dynamic Token Routing
Y Kim, J Kim, JH Park, M Lee, SK Lee
Proceedings of the 2023 Conference on Empirical Methods in Natural Language …, 2023
2023
Articles 1–14