Qiying Yu
Emu series: Generative Multimodal Models are Scalable In-Context Learners
Q Sun*, Q Yu*, Y Cui*, F Zhang*, X Zhang*, Z Luo, Y Wang, H Gao, ...
ICLR 2024 & CVPR 2024
Cited by: 349* · Year: 2023
Multimodal Federated Learning via Contrastive Representation Ensemble
Q Yu, Y Liu, Y Wang, K Xu, J Liu
International Conference on Learning Representations (ICLR), 2023
Cited by: 86 · Year: 2023
Emu3: Next-Token Prediction is All You Need
X Wang*, X Zhang*, Z Luo*, Q Sun*, Y Cui*, J Wang*, F Zhang*, Y Wang*, ...
arXiv preprint arXiv:2409.18869, 2024
Cited by: 72 · Year: 2024
CapsFusion: Rethinking Image-Text Data at Scale
Q Yu*, Q Sun*, X Zhang, Y Cui, F Zhang, Y Cao, X Wang, J Liu
Computer Vision and Pattern Recognition Conference (CVPR) 2024
Cited by: 43 · Year: 2023
EVA-CLIP-18B: Scaling CLIP to 18 Billion Parameters
Q Sun*, J Wang*, Q Yu*, Y Cui, F Zhang, X Zhang, X Wang
arXiv preprint arXiv:2402.04252, 2024
Cited by: 30 · Year: 2024
Adversarial Contrastive Learning via Asymmetric InfoNCE
Q Yu, J Lou, X Zhan, Q Li, W Zuo, Y Liu, J Liu
European Conference on Computer Vision (ECCV), 2022
Cited by: 22 · Year: 2022
Multimodal Molecular Pretraining via Modality Blending
Q Yu*, Y Zhang*, Y Ni, S Feng, Y Lan, H Zhou, J Liu
International Conference on Learning Representations (ICLR), 2024
Cited by: 15* · Year: 2023