Zhaowei Li
Verified email at m.fudan.edu.cn - Homepage
Title
Cited by
Year
GroundingGPT: Language Enhanced Multi-modal Grounding Model
Z Li, Q Xu, D Zhang, H Song, Y Cai, Q Qi, R Zhou, J Pan, Z Li, VT Vu, ...
ACL 2024, 2024
42* · 2024
Kimi k1.5: Scaling Reinforcement Learning with LLMs
K Team, A Du, B Gao, B Xing, C Jiang, C Chen, C Li, C Xiao, C Du, C Liao, ...
arXiv preprint arXiv:2501.12599, 2025
14 · 2025
SpeechAlign: Aligning Speech Generation to Human Preferences
D Zhang*, Z Li*, S Li, X Zhang, P Wang, Y Zhou, X Qiu
NeurIPS 2024, 2024
14 · 2024
SpeechAgents: Human-Communication Simulation with Multi-Modal Multi-Agent Systems
D Zhang, Z Li, P Wang, X Zhang, Y Zhou, X Qiu
arXiv preprint arXiv:2401.03945, 2024
8 · 2024
UnifiedMLLM: Enabling Unified Representation for Multi-modal Multi-tasks With Large Language Model
Z Li, W Wang, YQ Cai, X Qi, P Wang, D Zhang, H Song, B Jiang, Z Huang, ...
NAACL 2025 (Findings), 2024
6 · 2024
Advancing Fine-Grained Visual Understanding with Multi-Scale Alignment in Multi-Modal Models
W Wang*, Z Li*, Q Xu, L Li, Y Cai, B Jiang, H Song, X Hu, P Wang, L Xiao
arXiv preprint arXiv:2411.09691, 2024
1 · 2024
QCRD: Quality-Guided Contrastive Rationale Distillation for Large Language Models
W Wang, Z Li, Q Xu, Y Cai, H Song, Q Qi, R Zhou, Z Huang, T Wang, ...
arXiv preprint arXiv:2405.13014, 2024
1 · 2024
Understanding the Role of LLMs in Multimodal Evaluation Benchmarks
B Jiang, L Li, X Li, Z Li, X Feng, L Kong, Q Liu, X Qiu
arXiv preprint arXiv:2410.12329, 2024
2024
Articles 1–8