Jiuhai Chen
Verified email at umd.edu
Title
Cited by
Year
From quantity to quality: Boosting llm performance with self-guided data selection for instruction tuning
M Li, Y Zhang, Z Li, J Chen, L Chen, N Cheng, J Wang, T Zhou, J Xiao
NAACL 2024, 2023
134 · 2023
Instructzero: Efficient instruction optimization for black-box large language models
J Chen*, L Chen*, T Goldstein, H Huang, T Zhou
ICML 2024, 2023
76 · 2023
Quantifying uncertainty in answers from any language model and enhancing their trustworthiness
J Chen, J Mueller
ACL 2024, 2023
62* · 2023
GOAT: A Global Transformer on Large-scale Graphs
K Kong, J Chen, J Kirchenbauer, R Ni, CB Bruss, T Goldstein
International Conference on Machine Learning 2023, 2023
59 · 2023
When do you need Chain-of-Thought Prompting for ChatGPT?
J Chen, L Chen, H Huang, T Zhou
arXiv preprint arXiv:2304.03262, 2023
48 · 2023
How Many Demonstrations Do You Need for In-context Learning?
J Chen, L Chen, C Zhu, T Zhou
Empirical Methods in Natural Language Processing 2023, 2023
42* · 2023
ODIN: Disentangled Reward Mitigates Hacking in RLHF
L Chen, C Zhu, J Chen, D Soselia, T Zhou, T Goldstein, H Huang, ...
ICML 2024, 2024
37 · 2024
A closer look at distribution shifts and out-of-distribution generalization on graphs
M Ding, K Kong, J Chen, J Kirchenbauer, M Goldblum, D Wipf, F Huang, T Goldstein
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
33* · 2021
Why Propagate Alone? Parallel Use of Labels and Features on Graphs
Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ...
2020
32* · 2020
Reflection-tuning: Recycling data for better instruction-tuning
M Li, L Chen, J Chen, S He, T Zhou
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
29* · 2023
Particle-based energetic variational inference
Y Wang, J Chen, C Liu, L Kang
Statistics and Computing 31, 1-17, 2021
28 · 2021
A closer look at distribution shifts and out-of-distribution generalization on graphs
M Ding, K Kong, J Chen, J Kirchenbauer, M Goldblum, D Wipf, F Huang, T Goldstein
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
28 · 2021
Selective reflection-tuning: Student-selected data recycling for llm instruction-tuning
M Li, L Chen, J Chen, S He, J Gu, T Zhou
arXiv preprint arXiv:2402.10110, 2024
26 · 2024
Enhancing Visual-Language Modality Alignment in Large Vision Language Models via Self-Improvement
X Wang, J Chen, Z Wang, Y Zhou, Y Zhou, H Yao, T Zhou, T Goldstein, ...
Findings of NAACL 2025, 2024
25 · 2024
Automated data curation for robust language model fine-tuning
J Chen, J Mueller
arXiv preprint arXiv:2403.12776, 2024
22 · 2024
Gaussian process assisted active learning of physical laws
J Chen, L Kang, G Lin
Technometrics 63 (3), 329-342, 2021
22 · 2021
A closer look at distribution shifts and out-of-distribution generalization on graphs
M Ding*, K Kong*, J Chen*, J Kirchenbauer, M Goldblum, D Wipf, ...
NeurIPS 2021 Workshop on Distribution Shifts: Connecting Methods and …, 2021
21 · 2021
Does your graph need a confidence boost? Convergent boosted smoothing on graphs with tabular node features
J Chen, J Mueller, VN Ioannidis, S Adeshina, Y Wang, T Goldstein, D Wipf
International Conference on Learning Representations (ICLR) 2022, 2021
16 · 2021
Why propagate alone? parallel use of labels and features on graphs
Y Wang, J Jin, W Zhang, Y Yang, J Chen, Q Gan, Y Yu, Z Zhang, Z Huang, ...
arXiv preprint arXiv:2110.07190, 2021
12 · 2021
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer
L Chen, J Chen, H Huang, M Cheng
Empirical Methods in Natural Language Processing 2023, 2023
10 · 2023
Articles 1–20