From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning M Li, Y Zhang, Z Li, J Chen, L Chen, N Cheng, J Wang, T Zhou, J Xiao NAACL 2024, 2023 | 124 | 2023 |
InstructZero: Efficient instruction optimization for black-box large language models J Chen*, L Chen*, T Goldstein, H Huang, T Zhou ICML 2024, 2023 | 71 | 2023 |
Quantifying uncertainty in answers from any language model and enhancing their trustworthiness J Chen, J Mueller ACL 2024, 2023 | 59* | 2023 |
GOAT: A Global Transformer on Large-scale Graphs K Kong, J Chen, J Kirchenbauer, R Ni, CB Bruss, T Goldstein International Conference on Machine Learning 2023, 2023 | 52 | 2023 |
When do you need chain-of-thought prompting for ChatGPT? J Chen, L Chen, H Huang, T Zhou arXiv preprint arXiv:2304.03262, 2023 | 50 | 2023 |
How Many Demonstrations Do You Need for In-context Learning? J Chen, L Chen, C Zhu, T Zhou Empirical Methods in Natural Language Processing 2023, 2023 | 36* | 2023 |
ODIN: Disentangled Reward Mitigates Hacking in RLHF L Chen, C Zhu, J Chen, D Soselia, T Zhou, T Goldstein, H Huang, ... ICML 2024, 2024 | 32 | 2024 |
Particle-based energetic variational inference Y Wang, J Chen, C Liu, L Kang Statistics and Computing 31, 1-17, 2021 | 28 | 2021 |
Reflection-tuning: Recycling data for better instruction-tuning M Li, L Chen, J Chen, S He, T Zhou NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023 | 26* | 2023 |
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement X Wang, J Chen, Z Wang, Y Zhou, Y Zhou, H Yao, T Zhou, T Goldstein, ... Findings of NAACL 2025, 2024 | 23 | 2024 |
Gaussian process assisted active learning of physical laws J Chen, L Kang, G Lin Technometrics 63 (3), 329-342, 2021 | 23 | 2021 |
A closer look at distribution shifts and out-of-distribution generalization on graphs M Ding*, K Kong*, J Chen*, J Kirchenbauer, M Goldblum, D Wipf, ... NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021 | 22 | 2021 |
Automated data curation for robust language model fine-tuning J Chen, J Mueller arXiv preprint arXiv:2403.12776, 2024 | 21 | 2024 |
Selective reflection-tuning: Student-selected data recycling for LLM instruction-tuning M Li, L Chen, J Chen, S He, J Gu, T Zhou arXiv preprint arXiv:2402.10110, 2024 | 21 | 2024 |
Does your graph need a confidence boost? Convergent boosted smoothing on graphs with tabular node features J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf International Conference on Learning Representations (ICLR) 2022, 2021 | 16 | 2021 |
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer L Chen, J Chen, H Huang, M Cheng Empirical Methods in Natural Language Processing 2023, 2023 | 10 | 2023 |
Why propagate alone? parallel use of labels and features on graphs Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ... arXiv preprint arXiv:2110.07190, 2021 | 10 | 2021 |