Articles with open-access mandates - Trishul Chilimbi
Available: 4
Vision-language pre-training with triple contrastive learning
J Yang, J Duan, S Tran, Y Xu, S Chanda, L Chen, B Zeng, T Chilimbi, ...
Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern …, 2022
Mandate: US National Science Foundation, Cancer Prevention Research Institute of …
Inter-disciplinary research challenges in computer systems for the 2020s
A Cohen, X Shen, J Torrellas, J Tuck, Y Zhou, S Adve, I Akturk, S Bagchi, ...
National Science Foundation, 2018
Mandate: US National Science Foundation
Graph-aware language model pre-training on a large graph corpus can help multiple graph applications
H Xie, D Zheng, J Ma, H Zhang, VN Ioannidis, X Song, Q Ping, S Wang, ...
Proceedings of the 29th ACM SIGKDD Conference on Knowledge Discovery and …, 2023
Mandate: US National Institutes of Health
ReAugKD: Retrieval-augmented knowledge distillation for pre-trained language models
J Zhang, A Muhamed, A Anantharaman, G Wang, C Chen, K Zhong, ...
Proceedings of the 61st Annual Meeting of the Association for Computational …, 2023
Mandate: US National Science Foundation
Publication and funding information is determined automatically by a computer program