Xiang Liu
HKUST(GZ)
Verified email at connect.hkust-gz.edu.cn · Homepage
Title · Cited by · Year
Active prompting with chain-of-thought for large language models
S Diao*, P Wang*, Y Lin, R Pan, X Liu, T Zhang
ACL 2024, 2023
197 · 2023
LISA: Layerwise Importance Sampling for Memory-Efficient Large Language Model Fine-Tuning
R Pan*, X Liu*, S Diao, R Pi, J Zhang, C Han, T Zhang
NeurIPS 2024, 2024
36* · 2024
Pruner-Zero: Evolving Symbolic Pruning Metric from scratch for Large Language Models
P Dong*, L Li*, Z Tang, X Liu, X Pan, Q Wang, X Chu
ICML 2024, 2024
30 · 2024
Plum: Prompt learning using metaheuristic
R Pan*, S Xing*, S Diao, W Sun, X Liu, K Shum, R Pi, J Zhang, T Zhang
ACL 2024 Findings, 2023
13 · 2023
Dissecting the Runtime Performance of the Training, Fine-tuning, and Inference of Large Language Models
L Zhang*, X Liu*, Z Li, X Pan, P Dong, R Fan, R Guo, X Wang, Q Luo, ...
arXiv preprint arXiv:2311.03687, 2023
7 · 2023
ParZC: Parametric Zero-Cost Proxies for Efficient NAS
P Dong*, L Li*, X Pan, Z Wei, X Liu, Q Wang, X Chu
AAAI 2024 Oral, 2024
5 · 2024
LongGenBench: Long-context Generation Benchmark
X Liu, P Dong, X Hu, X Chu
EMNLP 2024 Findings, 2024
4 · 2024
Should We Really Edit Language Models? On the Evaluation of Edited Language Models
Q Li*, X Liu*, Z Tang, P Dong, Z Li, X Pan, X Chu
NeurIPS 2024, 2024
3 · 2024
Discovering Sparsity Allocation for Layer-wise Pruning of Large Language Models
L Li*, P Dong*, Z Tang, X Liu, Q Wang, W Luo, W Xue, Q Liu, X Chu, ...
NeurIPS 2024, 2024
3 · 2024
LPZero: Language Model Zero-cost Proxy Search from Zero
P Dong, L Li, X Liu, Z Tang, X Liu, Q Wang, X Chu
EMNLP 2024 Findings, 2024
1 · 2024
3D Question Answering for City Scene Understanding
P Sun*, Y Song*, X Liu, X Yang, Q Wang, T Li, Y Yang, X Chu
ACM MM 2024, 2024
1 · 2024
Mediator: Memory-efficient LLM Merging with Less Parameter Conflicts and Uncertainty Based Routing
K Lai, Z Tang, X Pan, P Dong, X Liu, H Chen, L Shen, B Li, X Chu
arXiv preprint arXiv:2502.04411, 2025
2025
Can LLMs Maintain Fundamental Abilities under KV Cache Compression?
X Liu, Z Tang, H Chen, P Dong, Z Li, X Zhou, B Li, X Hu, X Chu
arXiv preprint arXiv:2502.01941, 2025
2025
ChunkKV: Semantic-Preserving KV Cache Compression for Efficient Long-Context LLM Inference
X Liu, Z Tang, P Dong, Z Li, B Li, X Hu, X Chu
arXiv preprint arXiv:2502.00299, 2025
2025