| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Efficient Dialogue State Tracking by Selectively Overwriting Memory | S Kim, S Yang, G Kim, SW Lee | ACL 2020 | 205 | 2020 |
| Knowledge Unlearning for Mitigating Privacy Risks in Language Models | J Jang, D Yoon, S Yang, S Cha, M Lee, L Logeswaran, M Seo | ACL 2023 | 153 | 2022 |
| Towards Continual Knowledge Learning of Language Models | J Jang, S Ye, S Yang, J Shin, J Han, G Kim, SJ Choi, M Seo | ICLR 2022 | 142 | 2021 |
| Spatial Dependency Parsing for Semi-Structured Document Information Extraction | W Hwang, J Yim, S Park, S Yang, M Seo | Findings of ACL 2021 | 115 | 2020 |
| TemporalWiki: A Lifelong Benchmark for Training and Evaluating Ever-Evolving Language Models | J Jang, S Ye, C Lee, S Yang, J Shin, J Han, G Kim, M Seo | EMNLP 2022 | 90 | 2022 |
| NeurIPS 2020 EfficientQA competition: Systems, analyses and lessons learned | S Min, J Boyd-Graber, C Alberti, D Chen, E Choi, M Collins, K Guu, ... | Proceedings of Machine Learning Research (PMLR) 133, 86-111 | 78 | 2021 |
| Investigating the Effectiveness of Task-Agnostic Prefix Prompt for Instruction Following | S Ye, H Hwang, S Yang, H Yun, Y Kim, M Seo | AAAI 2024 | 65* | 2023 |
| Do Large Language Models Latently Perform Multi-Hop Reasoning? | S Yang, E Gribovskaya, N Kassner, M Geva, S Riedel | ACL 2024 | 45 | 2024 |
| Is Retriever Merely an Approximator of Reader? | S Yang, M Seo | Spa-NLP Workshop at ACL 2022 | 38 | 2020 |
| ClovaCall: Korean Goal-Oriented Dialog Speech Corpus for Automatic Speech Recognition of Contact Centers | JW Ha, K Nam, JG Kang, SW Lee, S Yang, H Jung, E Kim, H Kim, S Kim, ... | INTERSPEECH 2020 | 34 | 2020 |
| Generative Multi-hop Retrieval | H Lee, S Yang, H Oh, M Seo | EMNLP 2022 | 30* | 2022 |
| Nonparametric Decoding for Generative Retrieval | H Lee, J Kim, H Chang, H Oh, S Yang, V Karpukhin, Y Lu, M Seo | Findings of ACL 2023 | 26* | 2023 |
| Large-Scale Answerer in Questioner's Mind for Visual Dialog Question Generation | SW Lee, T Gao, S Yang, J Yoo, JW Ha | ICLR 2019 | 20 | 2019 |
| How Do Large Language Models Acquire Factual Knowledge During Pretraining? | H Chang, J Park, S Ye, S Yang, Y Seo, DS Chang, M Seo | NeurIPS 2024 | 19 | 2024 |
| Hopping Too Late: Exploring the Limitations of Large Language Models on Multi-Hop Queries | E Biran, D Gottesman, S Yang, M Geva, A Globerson | EMNLP 2024 | 9 | 2024 |
| Improving Probability-based Prompt Selection Through Unified Evaluation and Analysis | S Yang, J Kim, J Jang, S Ye, H Lee, M Seo | Transactions of the Association for Computational Linguistics (TACL) 12, 758-774 | 9 | 2024 |
| Designing a Minimal Retrieve-and-Read System for Open-Domain Question Answering | S Yang, M Seo | NAACL 2021 | 8 | 2021 |
| T-commerce sale prediction using deep learning and statistical model | I Kim, K Na, S Yang, J Jang, Y Kim, W Shin, D Kim | Journal of KIISE 44 (8), 803-812 | 8* | 2017 |
| Exploring the Practicality of Generative Retrieval on Dynamic Corpora | S Yoon, C Kim, H Lee, J Jang, S Yang, M Seo | EMNLP 2024 | 6* | 2023 |
| Do Large Language Models Perform Latent Multi-Hop Reasoning without Exploiting Shortcuts? | S Yang, N Kassner, E Gribovskaya, S Riedel, M Geva | arXiv preprint arXiv:2411.16679 | | 2024 |