Haoli Bai
Noah's Ark Lab, Huawei
Verified email at huawei.com - Homepage
Title
Cited by
Year
BinaryBERT: Pushing the limit of BERT quantization
H Bai, W Zhang, L Hou, L Shang, J Jin, X Jiang, Q Liu, M Lyu, I King
59th Annual Meeting of the Association for Computational Linguistics (ACL …, 2021
Cited by 241 · 2021
Few shot network compression via cross distillation
H Bai, J Wu, I King, M Lyu
Proceedings of the AAAI Conference on Artificial Intelligence, 3203-3210, 2020
Cited by 69 · 2020
Towards efficient post-training quantization of pre-trained language models
H Bai, L Hou, L Shang, X Jiang, I King, MR Lyu
Advances in Neural Information Processing Systems, 2022
Cited by 61 · 2022
Neural Relational Topic Models for Scientific Article Analysis
H Bai, Z Chen, MR Lyu, I King, Z Xu
Proceedings of the 27th ACM International Conference on Information and …, 2018
Cited by 59 · 2018
DART: Domain-adversarial residual-transfer networks for unsupervised cross-domain image classification
X Fang, H Bai, Z Guo, B Shen, S Hoi, Z Xu
Neural Networks 127, 182-192, 2020
Cited by 53 · 2020
Structured pruning of recurrent neural networks through neuron selection
L Wen, X Zhang, H Bai, Z Xu
Neural Networks 123, 134-141, 2020
Cited by 50 · 2020
Plug-and-Play: An Efficient Post-training Pruning Method for Large Language Models
Y Zhang, H Bai, H Lin, J Zhao, L Hou, CV Cannistraci
The Twelfth International Conference on Learning Representations, 2024
Cited by 36 · 2024
RTN: Reparameterized ternary network
Y Li, X Dong, SQ Zhang, H Bai, Y Chen, W Wang
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 4780-4787, 2020
Cited by 34 · 2020
Structured Inference for Recurrent Hidden Semi-Markov Model
H Liu, L He, H Bai, B Dai, K Bai, Z Xu
IJCAI, 2447-2453, 2018
Cited by 34 · 2018
Structured pruning for efficient generative pre-trained language models
C Tao, L Hou, H Bai, J Wei, X Jiang, Q Liu, P Luo, N Wong
Findings of the Association for Computational Linguistics: ACL 2023, 10880-10895, 2023
Cited by 33 · 2023
M-NAS: Meta neural architecture search
J Wang, J Wu, H Bai, J Cheng
Proceedings of the AAAI Conference on Artificial Intelligence 34 (04), 6186-6193, 2020
Cited by 32 · 2020
Revisiting Parameter Sharing for Automatic Neural Channel Number Search
J Wang*, H Bai*, J Wu, X Shi, J Huang, I King, M Lyu, J Cheng
Advances in Neural Information Processing Systems 33, 2020
Cited by 31 · 2020
PocketFlow: An automated framework for compressing and accelerating deep neural networks
J Wu, Y Zhang, H Bai, H Zhong, J Hou, W Liu, W Huang, J Huang
Cited by 29 · 2018
Efficient bitwidth search for practical mixed precision neural network
Y Li, W Wang, H Bai, R Gong, X Dong
arXiv preprint arXiv:2003.07577 3, 2020
Cited by 23 · 2020
Dynamically pruning SegFormer for efficient semantic segmentation
H Bai, H Mao, D Nair
ICASSP 2022, 2021
Cited by 21 · 2021
Bayesian automatic model compression
J Wang, H Bai, J Wu, J Cheng
IEEE Journal of Selected Topics in Signal Processing 14 (4), 727-736, 2020
Cited by 21 · 2020
IntactKV: Improving Large Language Model Quantization by Keeping Pivot Tokens Intact
R Liu, H Bai, H Lin, Y Li, H Gao, Z Xu, L Hou, J Yao, C Yuan
arXiv preprint arXiv:2403.01241, 2024
Cited by 17 · 2024
Variational random function model for network modeling
Z Xu, B Liu, S Zhe, H Bai, Z Wang, J Neville
IEEE transactions on neural networks and learning systems 30 (1), 318-324, 2018
Cited by 15 · 2018
Translider: Transfer ensemble learning from exploitation to exploration
K Zhong, Y Wei, C Yuan, H Bai, J Huang
Proceedings of the 26th ACM SIGKDD International Conference on Knowledge …, 2020
Cited by 14 · 2020
MoPE-CLIP: Structured pruning for efficient vision-language models with module-wise pruning error metric
H Lin, H Bai, Z Liu, L Hou, M Sun, L Song, Y Wei, Z Sun
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2024
Cited by 13 · 2024
Articles 1–20