Rulin Shao
Verified email at cs.washington.edu - Homepage
Title
Cited by
Year
On the adversarial robustness of vision transformers
R Shao, Z Shi, J Yi, PY Chen, CJ Hsieh
Transactions on Machine Learning Research (TMLR), 2022
235 · 2022
How Long Can Context Length of Open-Source LLMs truly Promise?
D Li, R Shao, A Xie, Y Sheng, L Zheng, J Gonzalez, I Stoica, X Ma, ...
NeurIPS 2023 Workshop on Instruction Tuning and Instruction Following, 2023
154* · 2023
MPCFormer: fast, performant and private Transformer inference with MPC
D Li*, R Shao*, H Wang*, H Guo, EP Xing, H Zhang
ICLR 2023 (Spotlight), 2022
75 · 2022
Datacomp-lm: In search of the next generation of training sets for language models
J Li, A Fang, G Smyrnis, M Ivgi, M Jordan, S Gadre, H Bansal, E Guha, ...
arXiv preprint arXiv:2406.11794, 2024
70* · 2024
Visit-bench: A dynamic benchmark for evaluating instruction-following vision-and-language models
Y Bitton, H Bansal, J Hessel, R Shao, W Zhu, A Awadalla, J Gardner, ...
Advances in Neural Information Processing Systems 36, 2024
59* · 2024
Stochastic Channel-Based Federated Learning With Neural Network Pruning for Medical Data Privacy Preservation: Model Development and Experimental Validation
R Shao, H He, Z Chen, H Liu, D Liu
Journal of Medical Internet Research (JMIR) Form Res 2020;4(12):e17265, DOI …, 2020
40* · 2020
Vision-flan: Scaling human-labeled tasks in visual instruction tuning
Z Xu, C Feng, R Shao, T Ashby, Y Shen, D Jin, Y Cheng, Q Wang, ...
arXiv preprint arXiv:2402.11690, 2024
31* · 2024
Distflashattn: Distributed memory-efficient attention for long-context llms training
D Li, R Shao, A Xie, EP Xing, X Ma, I Stoica, JE Gonzalez, H Zhang
First Conference on Language Modeling, 2024
25* · 2024
Robust text captchas using adversarial examples
R Shao, Z Shi, J Yi, PY Chen, CJ Hsieh
2022 IEEE International Conference on Big Data (Big Data), 1495-1504, 2022
25* · 2022
Language models scale reliably with over-training and on downstream tasks
SY Gadre, G Smyrnis, V Shankar, S Gururangan, M Wortsman, R Shao, ...
arXiv preprint arXiv:2403.08540, 2024
24 · 2024
Cross-modal Attention Congruence Regularization for Vision-Language Relation Alignment
R Pandey, R Shao, PP Liang, R Salakhutdinov, LP Morency
ACL 2023, 2022
14 · 2022
How and When Adversarial Robustness Transfers in Knowledge Distillation?
R Shao, J Yi, PY Chen, CJ Hsieh
ARoW in European Conference on Computer Vision (ECCV 2022), 2021
14 · 2021
Scaling Retrieval-Based Language Models with a Trillion-Token Datastore
R Shao, J He, A Asai, W Shi, T Dettmers, S Min, L Zettlemoyer, PWW Koh
Advances in Neural Information Processing Systems 37, 91260-91299, 2025
7* · 2025
Openscholar: Synthesizing scientific literature with retrieval-augmented lms
A Asai, J He, R Shao, W Shi, A Singh, JC Chang, K Lo, L Soldaini, ...
arXiv preprint arXiv:2411.14199, 2024
2 · 2024
RoRA-VLM: Robust Retrieval-Augmented Vision Language Models
J Qi, Z Xu, R Shao, Y Chen, J Di, Y Cheng, Q Wang, L Huang
arXiv preprint arXiv:2410.08876, 2024
1 · 2024
ICONS: Influence Consensus for Vision-Language Data Selection
X Wu, M Xia, R Shao, Z Deng, PW Koh, O Russakovsky
arXiv preprint arXiv:2501.00654, 2024
2024
Improving Factuality with Explicit Working Memory
M Chen, Y Li, K Padthe, R Shao, A Sun, L Zettlemoyer, G Gosh, W Yih
arXiv preprint arXiv:2412.18069, 2024
2024
Articles 1–17