| Title | Authors | Publication | Cited by | Year |
|---|---|---|---|---|
| WebSRC: A dataset for web-based structural reading comprehension | X Chen, Z Zhao, L Chen, D Zhang, J Ji, A Luo, Y Xiong, K Yu | arXiv preprint arXiv:2101.09465, 2021 | 76 | 2021 |
| Towards coherent image inpainting using denoising diffusion implicit models | G Zhang, J Ji, Y Zhang, M Yu, TS Jaakkola, S Chang | | 51 | 2023 |
| Defending large language models against jailbreak attacks via semantic smoothing | J Ji, B Hou, A Robey, GJ Pappas, H Hassani, Y Zhang, E Wong, S Chang | arXiv preprint arXiv:2402.16192, 2024 | 31 | 2024 |
| Advancing the robustness of large language models through self-denoised smoothing | J Ji, B Hou, Z Zhang, G Zhang, W Fan, Q Li, Y Zhang, G Liu, S Liu, ... | arXiv preprint arXiv:2404.12274, 2024 | 27* | 2024 |
| Improving diffusion models for scene text editing with dual encoders | J Ji, G Zhang, Z Wang, B Hou, Z Zhang, B Price, S Chang | arXiv preprint arXiv:2304.05568, 2023 | 25 | 2023 |
| Reversing the forget-retain objectives: An efficient LLM unlearning framework from logit difference | J Ji, Y Liu, Y Zhang, G Liu, RR Kompella, S Liu, S Chang | arXiv preprint arXiv:2406.08607, 2024 | 7 | 2024 |
| DFM: Dialogue foundation model for universal large-scale dialogue-oriented task learning | Z Chen, J Bao, L Chen, Y Liu, D Ma, B Chen, M Wu, S Zhu, X Dong, F Ge, ... | arXiv preprint arXiv:2205.12662, 2022 | 7* | 2022 |
| Controlling the focus of pretrained language generation models | J Ji, Y Kim, J Glass, T He | arXiv preprint arXiv:2203.01146, 2022 | 4 | 2022 |
| Augment before you try: Knowledge-enhanced table question answering via table expansion | Y Liu, J Ji, T Yu, R Rossi, S Kim, H Zhao, R Sinha, Y Zhang, S Chang | arXiv preprint arXiv:2401.15555, 2024 | 2 | 2024 |