| Title | Authors | Venue | Citations | Year |
|---|---|---|---|---|
| MathVerse: Does Your Multi-modal LLM Truly See the Diagrams in Visual Math Problems? | R Zhang, D Jiang, Y Zhang, H Lin, Z Guo, P Qiu, A Zhou, P Lu, KW Chang, ... | ECCV 2024 | 105 | 2024 |
| Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models | Y Zhang, H Bai, H Lin, J Zhao, L Hou, CV Cannistraci | The Twelfth International Conference on Learning Representations (ICLR 2024) | 36* | 2024 |
| IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact | R Liu, H Bai, H Lin, Y Li, H Gao, Z Xu, L Hou, J Yao, C Yuan | Findings of ACL 2024 | 17 | 2024 |
| DuQuant: Distributing Outliers via Dual Transformation Makes Stronger Quantized LLMs | H Lin, H Xu, Y Wu, J Cui, Y Zhang, L Mou, L Song, Z Sun, Y Wei | NeurIPS 2024 (Oral) | 13* | 2024 |
| MoPE-CLIP: Structured Pruning for Efficient Vision-Language Models with Module-wise Pruning Error Metric | H Lin, H Bai, Z Liu, L Hou, M Sun, L Song, Y Wei, Z Sun | CVPR 2024 | 13 | 2024 |