Zirui Wang
xAI
No verified email - Homepage
Title · Cited by · Year
PaLM 2 technical report
R Anil, AM Dai, O Firat, M Johnson, D Lepikhin, A Passos, S Shakeri, ...
arXiv preprint arXiv:2305.10403, 2023
1563 · 2023
CoCa: Contrastive captioners are image-text foundation models
J Yu, Z Wang, V Vasudevan, L Yeung, M Seyedhosseini, Y Wu
Transactions on Machine Learning Research, 2022
1441 · 2022
Beyond the imitation game: Quantifying and extrapolating the capabilities of language models
A Srivastava, A Rastogi, A Rao, AAM Shoeb, A Abid, A Fisch, AR Brown, ...
arXiv preprint arXiv:2206.04615, 2022
1300 · 2022
Scaling autoregressive models for content-rich text-to-image generation
J Yu, Y Xu, JY Koh, T Luong, G Baid, Z Wang, V Vasudevan, A Ku, Y Yang, ...
Transactions on Machine Learning Research, 2022
1099 · 2022
SimVLM: Simple visual language model pretraining with weak supervision
Z Wang, J Yu, AW Yu, Z Dai, Y Tsvetkov, Y Cao
ICLR, 2022
869 · 2022
Characterizing and avoiding negative transfer
Z Wang, Z Dai, B Póczos, J Carbonell
CVPR, 2019
608 · 2019
Ferret: Refer and ground anything anywhere at any granularity
H You, H Zhang, Z Gan, X Du, B Zhang, Z Wang, L Cao, SF Chang, ...
arXiv preprint arXiv:2310.07704, 2023
242 · 2023
Gradient Vaccine: Investigating and Improving Multi-task Optimization in Massively Multilingual Models
Z Wang, Y Tsvetkov, O Firat, Y Cao
ICLR, 2021
198 · 2021
MM1: methods, analysis and insights from multimodal LLM pre-training
B McKinzie, Z Gan, JP Fauconnier, S Dodge, B Zhang, P Dufter, D Shah, ...
European Conference on Computer Vision, 304-323, 2024
180 · 2024
On Negative Interference in Multilingual Models: Findings and A Meta-Learning Treatment
Z Wang, ZC Lipton, Y Tsvetkov
EMNLP, 2020
138 · 2020
Towards zero-label language learning
Z Wang, AW Yu, O Firat, Y Cao
arXiv preprint arXiv:2109.09193, 2021
90 · 2021
VideoCoCa: Video-text modeling with zero-shot transfer from contrastive captioners
S Yan, T Zhu, Z Wang, Y Cao, M Zhang, S Ghosh, Y Wu, J Yu
arXiv preprint arXiv:2212.04979, 2022
86 · 2022
REVEAL: Retrieval-augmented visual-language pre-training with multi-source multimodal knowledge memory
Z Hu, A Iscen, C Sun, Z Wang, KW Chang, Y Sun, C Schmid, DA Ross, ...
Proceedings of the IEEE/CVF conference on computer vision and pattern …, 2023
82 · 2023
Cross-lingual alignment vs joint training: A comparative study and a simple unified framework
Z Wang, J Xie, R Xu, Y Yang, G Neubig, J Carbonell
ICLR, 2020
81 · 2020
Efficient Meta Lifelong-Learning with Limited Memory
Z Wang, SV Mehta, B Póczos, J Carbonell
EMNLP, 2020
64 · 2020
CoCa: Contrastive captioners are image-text foundation models. arXiv 2022
J Yu, Z Wang, V Vasudevan, L Yeung, M Seyedhosseini, Y Wu
arXiv preprint arXiv:2205.01917, 2022
61 · 2022
Scaling autoregressive models for content-rich text-to-image generation. arXiv 2022
J Yu, Y Xu, JY Koh, T Luong, G Baid, Z Wang, V Vasudevan, A Ku, Y Yang, ...
arXiv preprint arXiv:2206.10789, 2022
40 · 2022
MedBLIP: Bootstrapping language-image pre-training from 3D medical images and texts
Q Chen, Y Hong
Proceedings of the Asian Conference on Computer Vision, 2404-2420, 2024
34 · 2024
Apple intelligence foundation language models
T Gunter, Z Wang, C Wang, R Pang, A Narayanan, A Zhang, B Zhang, ...
arXiv preprint arXiv:2407.21075, 2024
29 · 2024
Theoretical guarantees of transfer learning
Z Wang
arXiv preprint arXiv:1810.05986, 2018
17 · 2018
Articles 1–20