Seonghyeon Ye
Verified email at kaist.ac.kr - Homepage
Title
Cited by
Year
Towards Continual Knowledge Learning of Language Models
J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo
ICLR 2022, 2022
144 | 2022
TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models
J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo
EMNLP 2022, 2022
86 | 2022
Flask: Fine-grained language model evaluation based on alignment skill sets
S Ye, D Kim, S Kim, H Hwang, S Kim, Y Jo, J Thorne, J Kim, M Seo
ICLR 2024, 2024
81 | 2024
The CoT Collection: Improving Zero-shot and Few-shot Learning of Language Models via Chain-of-Thought Fine-Tuning
S Kim, SJ Joo, D Kim, J Jang, S Ye, J Shin, M Seo
EMNLP 2023, 2023
81 | 2023
In-context instruction learning
S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo
AAAI 2024, 2024
71* | 2024
Can Large Language Models Truly Understand Prompts? A Case Study with Negated Prompts
J Jang, S Ye, M Seo
Transfer Learning for NLP Workshop @ NeurIPS 2022, 2022
71 | 2022
Exploring the benefits of training expert language models over instruction tuning
J Jang, S Kim, S Ye, D Kim, L Logeswaran, M Lee, K Lee, M Seo
ICML 2023, 2023
64 | 2023
Selfee: Iterative self-revising LLM empowered by self-feedback generation
S Ye, Y Jo, D Kim, S Kim, H Hwang, M Seo
Blog post, 2023
57 | 2023
Dimensional Emotion Detection from Categorical Emotion
S Park, J Kim, S Ye, J Jeon, HY Park, A Oh
EMNLP 2021, 2021
53 | 2021
Guess the Instruction! Flipped Learning Makes Language Models Stronger Zero-Shot Learners
S Ye, D Kim, J Jang, J Shin, M Seo
ICLR 2023, 2023
36* | 2023
Consent in Crisis: The Rapid Decline of the AI Data Commons
S Longpre, R Mahari, A Lee, C Lund, H Oderinwale, W Brannon, ...
NeurIPS 2024, 2024
27 | 2024
Efficient Contrastive Learning via Novel Data Augmentation and Curriculum Learning
S Ye, J Kim, A Oh
EMNLP 2021, 2021
23 | 2021
How Do Large Language Models Acquire Factual Knowledge During Pretraining?
H Chang, J Park, S Ye, S Yang, Y Seo, DS Chang, M Seo
NeurIPS 2024, 2024
22 | 2024
Latent action pretraining from videos
S Ye, J Jang, B Jeon, S Joo, J Yang, B Peng, A Mandlekar, R Tan, ...
ICLR 2025, 2025
13 | 2025
Self-Explore: Enhancing Mathematical Reasoning in Language Models with Fine-grained Rewards
H Hwang, D Kim, S Kim, S Ye, M Seo
EMNLP 2024 Findings, 2024
13* | 2024
Instructir: A benchmark for instruction following of information retrieval models
H Oh, H Lee, S Ye, H Shin, H Jang, C Jun, M Seo
arXiv preprint arXiv:2402.14334, 2024
10 | 2024
Improving probability-based prompt selection through unified evaluation and analysis
S Yang, J Kim, J Jang, S Ye, H Lee, M Seo
TACL 2024, 2024
10 | 2024
Efficiently Enhancing Zero-Shot Performance of Instruction Following Model via Retrieval of Soft Prompt
S Ye, J Jang, D Kim, Y Jo, M Seo
EMNLP 2023 Findings, 2023
10* | 2023
Carpe Diem: On the Evaluation of World Knowledge in Lifelong Language Models
Y Kim, J Yoon, S Ye, SJ Hwang, S Yun
NAACL 2024, 2024
9 | 2024
Instruction Matters, a Simple yet Effective Task Selection Approach in Instruction Tuning for Specific Tasks
C Lee, J Han, S Ye, SJ Choi, H Lee, K Bae
EMNLP 2024, 2024
5* | 2024