Simran Arora
Computer Science, Stanford University
Verified email at cs.stanford.edu - Homepage
Title · Cited by · Year
On the opportunities and risks of foundation models
R Bommasani, DA Hudson, E Adeli, R Altman, S Arora, S von Arx, ...
arXiv preprint arXiv:2108.07258, 2021
4661 · 2021
DecodingTrust: A Comprehensive Assessment of Trustworthiness in GPT Models
B Wang, W Chen, H Pei, C Xie, M Kang, C Zhang, C Xu, Z Xiong, R Dutta, ...
Advances in Neural Information Processing Systems (NeurIPS), 2023
373 · 2023
Ask me anything: A simple strategy for prompting language models
S Arora, A Narayan, MF Chen, L Orr, N Guha, K Bhatia, I Chami, F Sala, ...
International Conference on Learning Representations (ICLR), 2022
221 · 2022
Can foundation models wrangle your data?
A Narayan, I Chami, L Orr, S Arora, C Ré
Proceedings of the International Conference on Very Large Databases (PVLDB), 2022
199 · 2022
Contextual embeddings: When are they worth it?
S Arora, A May, J Zhang, C Ré
Association for Computational Linguistics (ACL), 2020
92 · 2020
A temperature-mapping molecular sensor for polyurethane-based elastomers
BP Mason, M Whittaker, J Hemmer, S Arora, A Harper, S Alnemrat, ...
Applied Physics Letters (APL), 2016
72* · 2016
Language models enable simple systems for generating structured views of heterogeneous data lakes
S Arora, B Yang, S Eyuboglu, A Narayan, A Hojel, I Trummer, C Ré
Proceedings of the International Conference on Very Large Databases (PVLDB), 2023
68 · 2023
Zoology: Measuring and improving recall in efficient language models
S Arora, S Eyuboglu, A Timalsina, I Johnson, M Poli, J Zou, A Rudra, C Ré
International Conference on Learning Representations (ICLR), 2023
64 · 2023
Bootleg: Chasing the tail with self-supervised named entity disambiguation
L Orr, M Leszczynski, S Arora, S Wu, N Guha, X Ling, C Ré
Conference on Innovative Data Systems Research (CIDR), 2020
57 · 2020
Simple linear attention language models balance the recall-throughput tradeoff
S Arora, S Eyuboglu, M Zhang, A Timalsina, S Alberti, D Zinsley, J Zou, ...
International Conference on Machine Learning (ICML), 2024
49 · 2024
Monarch mixer: A simple sub-quadratic gemm-based architecture
D Fu, S Arora, J Grogan, I Johnson, ES Eyuboglu, A Thomas, B Spector, ...
Advances in Neural Information Processing Systems (NeurIPS), 2024
43 · 2024
Optimizing declarative graph queries at large scale
Q Zhang, A Acharya, H Chen, S Arora, A Chen, V Liu, BT Loo
Proceedings of the International Conference on Management of Data (SIGMOD), 2019
21 · 2019
Reasoning over public and private data in retrieval-based systems
S Arora, P Lewis, A Fan, J Kahn, C Ré
Transactions of the Association for Computational Linguistics (TACL), 2023
18* · 2023
Control of multiple microrobots with multiscale magnetic field superposition
E Steager, D Wong, J Wang, S Arora, V Kumar
International Conference on Manipulation, Automation and Robotics at Small …, 2017
13 · 2017
RELIC: Investigating large language model responses using self-consistency
F Cheng, V Zouhar, S Arora, M Sachan, H Strobelt, M El-Assady
Proceedings of the CHI Conference on Human Factors in Computing Systems (CHI), 2024
11 · 2024
Benchmarking and building long-context retrieval models with LoCo and M2-BERT
J Saad-Falcon, DY Fu, S Arora, N Guha, C Ré
International Conference on Machine Learning (ICML), 2024
11 · 2024
Metadata shaping: A simple approach for knowledge-enhanced language models
S Arora, S Wu, E Liu, C Ré
Findings of the Association for Computational Linguistics (ACL), 2022
11 · 2022
Can Foundation Models Help Us Achieve Perfect Secrecy?
S Arora, C Ré
AAAI 2023 Workshop on PPAI, 2022
5 · 2022
Just read twice: Closing the recall gap for recurrent language models
S Arora, A Timalsina, A Singhal, B Spector, S Eyuboglu, X Zhao, A Rao, ...
arXiv preprint arXiv:2407.05483, 2024
4 · 2024
Optimistic Verifiable Training by Controlling Hardware Nondeterminism
M Srivastava, S Arora, D Boneh
Advances in Neural Information Processing Systems (NeurIPS), 2024
4 · 2024
Articles 1–20