| Title | Authors | Venue | Cited by | Year |
|---|---|---|---|---|
| Black-box prompt learning for pre-trained language models | S Diao, Z Huang, R Xu, X Li, Y Lin, X Zhou, T Zhang | arXiv preprint arXiv:2201.08531 | 95 | 2022 |
| Effective Sparsification of Neural Networks with Global Sparsity Constraint | X Zhou, W Zhang, H Xu, T Zhang | Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) | 78 | 2021 |
| Sparse invariant risk minimization | X Zhou, Y Lin, W Zhang, T Zhang | International Conference on Machine Learning (ICML), 27222-27244 | 75 | 2022 |
| Model agnostic sample reweighting for out-of-distribution learning | X Zhou, Y Lin, R Pi, W Zhang, R Xu, P Cui, T Zhang | International Conference on Machine Learning (ICML), 27203-27221 | 56 | 2022 |
| Efficient neural network training via forward and backward propagation sparsification | X Zhou, W Zhang, Z Chen, S Diao, T Zhang | Advances in Neural Information Processing Systems 34, 15216-15229 | 49 | 2021 |
| Probabilistic bilevel coreset selection | X Zhou, R Pi, W Zhang, Y Lin, Z Chen, T Zhang | International Conference on Machine Learning (ICML), 27287-27302 | 34 | 2022 |
| A holistic view of label noise transition matrix in deep learning and beyond | Y Lin, R Pi, W Zhang, X Xia, J Gao, X Zhou, T Liu, B Han | The Eleventh International Conference on Learning Representations (ICLR) | 14 | 2022 |
| Model agnostic sample reweighting for out-of-distribution learning | X Zhou, Y Lin, R Pi, W Zhang, R Xu, P Cui, T Zhang | arXiv preprint arXiv:2301.09819 | 1 | 2023 |