Junjie Ye
Title
Cited by
Year
A comprehensive capability analysis of GPT-3 and GPT-3.5 series models
J Ye, X Chen, N Xu, C Zu, Z Shao, S Liu, Y Cui, Z Zhou, C Gong, Y Shen, ...
arXiv preprint arXiv:2303.10420, 2023
Cited by 349* · 2023
InstructUIE: Multi-task instruction tuning for unified information extraction
X Wang, W Zhou, C Zu, H Xia, T Chen, Y Zhang, R Zheng, J Ye, Q Zhang, ...
arXiv preprint arXiv:2304.08085, 2023
Cited by 54 · 2023
How robust is GPT-3.5 to predecessors? A comprehensive study on language understanding tasks
X Chen, J Ye, C Zu, N Xu, R Zheng, M Peng, J Zhou, T Gui, Q Zhang, ...
arXiv preprint arXiv:2303.00293, 2023
Cited by 47 · 2023
Sentiment-aware multimodal pre-training for multimodal sentiment analysis
J Ye, J Zhou, J Tian, R Wang, J Zhou, T Gui, Q Zhang, X Huang
Knowledge-Based Systems 258, 110021, 2022
Cited by 44 · 2022
CodeChameleon: Personalized encryption framework for jailbreaking large language models
H Lv, X Wang, Y Zhang, C Huang, S Dou, J Ye, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2402.16717, 2024
Cited by 40 · 2024
LLM-DA: Data augmentation via large language models for few-shot named entity recognition
J Ye, N Xu, Y Wang, J Zhou, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2402.14568, 2024
Cited by 36 · 2024
Causal intervention improves implicit sentiment analysis
S Wang, J Zhou, C Sun, J Ye, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2208.09329, 2022
Cited by 24 · 2022
ToolEyes: fine-grained evaluation for tool learning capabilities of large language models in real-world scenarios
J Ye, G Li, S Gao, C Huang, Y Wu, S Li, X Fan, S Dou, Q Zhang, T Gui, ...
arXiv preprint arXiv:2401.00741, 2024
Cited by 18 · 2024
Poly-Visual-Expert Vision-Language Models
X Fan, T Ji, S Li, S Jin, S Song, J Wang, B Hong, L Chen, G Zheng, ...
First Conference on Language Modeling, 2024
Cited by 14* · 2024
ToolSword: Unveiling safety issues of large language models in tool learning across three stages
J Ye, S Li, G Li, C Huang, S Gao, Y Wu, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2402.10753, 2024
Cited by 12 · 2024
SafeAligner: Safety alignment against jailbreak attacks via response disparity guidance
C Huang, W Zhao, R Zheng, H Lv, W Zhan, S Dou, S Li, X Wang, E Zhou, ...
arXiv preprint arXiv:2406.18118, 2024
Cited by 9 · 2024
RoTBench: a multi-level benchmark for evaluating the robustness of large language models in tool learning
J Ye, Y Wu, S Gao, C Huang, S Li, G Li, X Fan, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2401.08326, 2024
Cited by 8 · 2024
Linear alignment: A closed-form solution for aligning human preferences without tuning and feedback
S Gao, Q Ge, W Shen, S Dou, J Ye, X Wang, R Zheng, Y Zou, Z Chen, ...
arXiv preprint arXiv:2401.11458, 2024
Cited by 7 · 2024
LLM can achieve self-regulation via hyperparameter aware generation
S Wang, S Li, T Sun, J Fu, Q Cheng, J Ye, J Ye, X Qiu, X Huang
arXiv preprint arXiv:2402.11251, 2024
Cited by 3 · 2024
Beyond boundaries: Learning a universal entity taxonomy across datasets and languages for open named entity recognition
Y Yang, W Zhao, C Huang, J Ye, X Wang, H Zheng, Y Nan, Y Wang, X Xu, ...
arXiv preprint arXiv:2406.11192, 2024
Cited by 2 · 2024
MetaRM: Shifted distributions alignment via meta-learning
S Dou, Y Liu, E Zhou, T Li, H Jia, L Xiong, X Zhao, J Ye, R Zheng, T Gui, ...
arXiv preprint arXiv:2405.00438, 2024
Cited by 2 · 2024
RethinkingTMSC: an empirical study for target-oriented multimodal sentiment classification
J Ye, J Zhou, J Tian, R Wang, Q Zhang, T Gui, X Huang
arXiv preprint arXiv:2310.09596, 2023
Cited by 2 · 2023
Improving Discriminative Capability of Reward Models in RLHF Using Contrastive Learning
L Chen, R Zheng, B Wang, S Jin, C Huang, J Ye, Z Zhang, Y Zhou, Z Xi, ...
Proceedings of the 2024 Conference on Empirical Methods in Natural Language …, 2024
Cited by 1 · 2024
Empirical Insights on Fine-Tuning Large Language Models for Question-Answering
J Ye, Y Yang, Q Zhang, T Gui, X Huang, P Wang, Z Shi, J Fan
arXiv preprint arXiv:2409.15825, 2024
Cited by 1 · 2024
Measuring Data Diversity for Instruction Tuning: A Systematic Analysis and A Reliable Metric
Y Yang, Y Nan, J Ye, S Dou, X Wang, S Li, H Lv, T Gui, Q Zhang, X Huang
arXiv preprint arXiv:2502.17184, 2025
2025
Articles 1–20