Shibo Jie
Convolutional bypasses are better vision transformer adapters
S Jie, ZH Deng, S Chen, Z Jin
arXiv preprint arXiv:2207.07039, 2022
Cited by 149 · 2022
FacT: Factor-Tuning for Lightweight Adaptation on Vision Transformer
S Jie, ZH Deng
AAAI Conference on Artificial Intelligence (AAAI) 37 (1), 1060–1068, 2023
Cited by 108 · 2023
Revisiting the parameter efficiency of adapters from the perspective of precision redundancy
S Jie, H Wang, ZH Deng
IEEE/CVF International Conference on Computer Vision (ICCV), 17217-17226, 2023
Cited by 32 · 2023
Alleviating representational shift for continual fine-tuning
S Jie, ZH Deng, Z Li
IEEE/CVF Conference on Computer Vision and Pattern Recognition Workshops …, 2022
Cited by 15 · 2022
Memory-Space Visual Prompting for Efficient Vision-Language Fine-Tuning
S Jie, Y Tang, N Ding, ZH Deng, K Han, Y Wang
International Conference on Machine Learning (ICML), 2024
Cited by 8 · 2024
Detachedly Learn a Classifier for Class-Incremental Learning
Z Li, S Jie, ZH Deng
arXiv preprint arXiv:2302.11730, 2023
Cited by 2 · 2023
Token Compensator: Altering Inference Cost of Vision Transformer without Re-Tuning
S Jie, Y Tang, J Guo, ZH Deng, K Han, Y Wang
European Conference on Computer Vision (ECCV), 2024
Cited by 1 · 2024
Focus your attention when few-shot classification
H Wang, S Jie, Z Deng
Advances in Neural Information Processing Systems (NeurIPS) 36, 2024
2024
Articles 1–8