Yangguang Li
VAST, CUHK
Verified email at link.cuhk.edu.hk
Title
Cited by
Year
Supervision exists everywhere: A data efficient contrastive language-image pre-training paradigm
Y Li, F Liang, L Zhao, Y Cui, W Ouyang, J Shao, F Yu, J Yan
International Conference on Learning Representations (ICLR 2022), 2021
Cited by 485, 2021
Triplane meets gaussian splatting: Fast and generalizable single-view 3d reconstruction with transformers
ZX Zou, Z Yu, YC Guo, Y Li, D Liang, YP Cao, SH Zhang
Computer Vision and Pattern Recognition (CVPR 2024), 2023
Cited by 130, 2023
Triposr: Fast 3d object reconstruction from a single image
D Tochilkin, D Pankratz, Z Liu, Z Huang, A Letts, Y Li, D Liang, C Laforte, ...
arXiv preprint arXiv:2403.02151, 2024
Cited by 92, 2024
Text-to-3d with classifier score distillation
X Yu, YC Guo, Y Li, D Liang, SH Zhang, X Qi
International Conference on Learning Representations (ICLR 2024), 2023
Cited by 62, 2023
SNCSE: Contrastive Learning for Unsupervised Sentence Embedding with Soft Negative Samples
H Wang, Y Li, Z Huang, Y Dou, L Kong, J Shao
arXiv preprint arXiv:2201.05979, 2022
Cited by 62, 2022
Bevbert: Multimodal map pre-training for language-guided navigation
D An, Y Qi, Y Li, Y Huang, L Wang, T Tan, J Shao
Proceedings of the IEEE/CVF International Conference on Computer Vision …, 2023
Cited by 48, 2023
Fast-BEV: A Fast and Strong Bird's-Eye View Perception Baseline
Y Li, B Huang, Z Chen, Y Cui, F Liang, M Shen, F Liu, E Xie, L Sheng, ...
IEEE Transactions on Pattern Analysis and Machine Intelligence, 2024
Cited by 43, 2024
Democratizing Contrastive Language-Image Pre-training: A CLIP Benchmark of Data, Model, and Supervision
Y Cui, L Zhao, F Liang, Y Li, J Shao
First Workshop on Pre-training (ICML 2022), 2022
Cited by 40, 2022
Intern: A new learning paradigm towards general vision
J Shao, S Chen, Y Li, K Wang, Z Yin, Y He, J Teng, Q Sun, M Gao, J Liu, ...
arXiv preprint arXiv:2111.08687, 2021
Cited by 37, 2021
Fast-BEV: Towards real-time on-vehicle bird's-eye view perception
B Huang, Y Li, E Xie, F Liang, L Wang, M Shen, F Liu, T Wang, P Luo, ...
arXiv preprint arXiv:2301.07870, 2023
Cited by 26, 2023
Task-Balanced Distillation for Object Detection
R Tang, Z Liu, Y Li, Y Song, H Liu, Q Wang, J Shao, G Duan, J Tan
Pattern Recognition 2022, 2022
Cited by 26, 2022
Bevbert: Topo-metric map pre-training for language-guided navigation
D An, Y Qi, Y Li, Y Huang, L Wang, T Tan, J Shao
arXiv preprint arXiv:2212.04385, 2022
Cited by 25, 2022
Unidream: Unifying diffusion priors for relightable text-to-3d generation
Z Liu, Y Li, Y Lin, X Yu, S Peng, YP Cao, X Qi, X Huang, D Liang, ...
European Conference on Computer Vision (ECCV 2024), 2023
Cited by 23, 2023
Supmae: Supervised masked autoencoders are efficient vision learners
F Liang, Y Li, D Marculescu
Edge Intelligence Workshop at AAAI 2024, 2022
Cited by 23, 2022
Repre: Improving self-supervised vision transformer with reconstructive pre-training
L Wang, F Liang, Y Li, W Ouyang, H Zhang, J Shao
International Joint Conference on Artificial Intelligence (IJCAI 2022), 2022
Cited by 23, 2022
EpiDiff: Enhancing Multi-View Synthesis via Localized Epipolar-Constrained Diffusion
Z Huang, H Wen, J Dong, Y Wang, Y Li, X Chen, YP Cao, D Liang, Y Qiao, ...
Computer Vision and Pattern Recognition (CVPR 2024), 2023
Cited by 20, 2023
Lumina-Next: Making Lumina-T2X Stronger and Faster with Next-DiT
L Zhuo, R Du, H Xiao, Y Li, D Liu, R Huang, W Liu, L Zhao, FY Wang, ...
Advances in Neural Information Processing Systems (NeurIPS 2024), 2024
Cited by 17*, 2024
Gvgen: Text-to-3d generation with volumetric representation
X He, J Chen, S Peng, D Huang, Y Li, X Huang, C Yuan, W Ouyang, T He
European Conference on Computer Vision (ECCV 2024), 2024
Cited by 13, 2024
GraphReader: Building Graph-based Agent to Enhance Long-Context Abilities of Large Language Models
S Li, Y He, H Guo, X Bu, G Bai, J Liu, J Liu, X Qu, Y Li, W Ouyang, W Su, ...
Empirical Methods in Natural Language Processing (EMNLP 2024 Findings), 2024
Cited by 12, 2024
A Mixture of Surprises for Unsupervised Reinforcement Learning
A Zhao, MG Lin, Y Li, YJ Liu, G Huang
Advances in Neural Information Processing Systems (NeurIPS 2022), 2022
Cited by 12, 2022
Articles 1–20