Lichang Chen
University of Maryland; Google DeepMind
Verified email at cs.umd.edu - Homepage
Title
Cited by
Year
HallusionBench: An advanced diagnostic suite for entangled language hallucination and visual illusion in large vision-language models
F Liu, T Guan, Z Li, L Chen, Y Yacoob, D Manocha, T Zhou
CVPR 2024, 2023
279* · 2023
AlpaGasus: Training a Better Alpaca with Fewer Data
L Chen, S Li, J Yan, H Wang, K Gunaratna, V Yadav, Z Tang, V Srinivasan, ...
ICLR 2024, 2024
227* · 2024
From quantity to quality: Boosting LLM performance with self-guided data selection for instruction tuning
M Li, Y Zhang, Z Li, J Chen, L Chen, N Cheng, J Wang, T Zhou, J Xiao
NAACL 2024, 2023
129 · 2023
Unbiased watermark for large language models
Z Hu, L Chen, X Wu, Y Wu, H Zhang, H Huang
ICLR 2024, 2023
100 · 2023
Backdooring instruction-tuned large language models with virtual prompt injection
J Yan, V Yadav, S Li, L Chen, Z Tang, H Wang, V Srinivasan, X Ren, H Jin
NAACL 2024, 2024
94* · 2024
InstructZero: Efficient instruction optimization for black-box large language models
L Chen, J Chen, T Goldstein, H Huang, T Zhou
ICML 2024, 2023
75 · 2023
AlpaCare: Instruction-tuned large language models for medical application
X Zhang, C Tian, X Yang, L Chen, Z Li, LR Petzold
arXiv preprint arXiv:2310.14558, 2023
48 · 2023
When do you need Chain-of-Thought Prompting for ChatGPT?
J Chen, L Chen, H Huang, T Zhou
arXiv preprint arXiv:2304.03262, 2023
48 · 2023
Reflection-tuning: Recycling data for better instruction-tuning
M Li, L Chen, J Chen, S He, T Zhou
ACL 2024, 2024
43* · 2024
How Many Demonstrations Do You Need for In-context Learning?
J Chen, L Chen, C Zhu, T Zhou
EMNLP 2023, 2023
42* · 2023
ODIN: Disentangled Reward Mitigates Hacking in RLHF
L Chen, C Zhu, D Soselia, J Chen, T Zhou, T Goldstein, H Huang, ...
ICML 2024, 2024
37 · 2024
Prompting language-informed distribution for compositional zero-shot learning
W Bao, L Chen, H Huang, Y Kong
ECCV 2024, 2023
21 · 2023
Graph edit distance reward: Learning to edit scene graph
L Chen, G Lin, S Wang, Q Wu
ECCV 2020, 2020
21 · 2020
Backdoor learning on sequence to sequence models
L Chen, M Cheng, H Huang
arXiv preprint, 2022
19 · 2022
Task-aware sampling layer for point-wise analysis
Y Lin, L Chen, H Huang, C Ma, X Han, S Cui
IEEE Transactions on Visualization and Computer Graphics (TVCG), 2021
13* · 2021
Your Vision-Language Model Itself Is a Strong Filter: Towards High-Quality Instruction Tuning with Data Selection
R Chen, Y Wu, L Chen, G Liu, Q He, T Xiong, C Liu, J Guo, H Huang
ACL 2024 (Findings), 2024
10 · 2024
PTP: Boosting Stability and Performance of Prompt Tuning with Perturbation-Based Regularizer
L Chen, H Huang, M Cheng
EMNLP 2023, 2022
10 · 2022
Can LLMs speak for diverse people? Tuning LLMs via debate to generate controllable controversial statements
M Li, J Chen, L Chen, T Zhou
ACL 2024 (Findings), 2024
9 · 2024
Unveiling the Impact of Coding Data Instruction Fine-Tuning on Large Language Models Reasoning
X Zhang, ZZ Chen, X Ye, X Yang, L Chen, WY Wang, LR Petzold
AAAI 2025, 2025
6 · 2025
From lists to emojis: How format bias affects model alignment
L Chen, X Zhang, W Xiong, T Zhou, H Huang, T Zhang
arXiv preprint arXiv:2409.11704, 2024
6* · 2024
Articles 1–20