PipeMare: Asynchronous Pipeline Parallel DNN Training. B. Yang, J. Zhang, J. Li, C. Ré, C. Aberger, C. De Sa. Proceedings of Machine Learning and Systems 3, 269–296, 2021. Cited by 138.

MONGOOSE: A Learnable LSH Framework for Efficient Neural Network Training. B. Chen, Z. Liu, B. Peng, Z. Xu, J. L. Li, T. Dao, Z. Song, A. Shrivastava, C. Ré. International Conference on Learning Representations, 2020. Cited by 81.

SambaLingo: Teaching Large Language Models New Languages. Z. Csaki, B. Li, J. Li, Q. Xu, P. Pawakapan, L. Zhang, Y. Du, H. Zhao, C. Hu, et al. arXiv preprint arXiv:2404.05829, 2024. Cited by 11.

Constructing Domain-Specific Evaluation Sets for LLM-as-a-Judge. R. Raju, S. Jain, B. Li, J. Li, U. Thakker. arXiv preprint arXiv:2408.08808, 2024. Cited by 8.

Climbing the WOL: Training for Cheaper Inference. Z. Liu, Z. Xu, A. Ji, J. Li, B. Chen, A. Shrivastava. arXiv preprint arXiv:2007.01230, 2020. Cited by 8.

Composition of Experts: A Modular Compound AI System Leveraging Large Language Models. S. Jain, R. Raju, B. Li, Z. Csaki, J. Li, K. Liang, G. Feng, U. Thakkar, A. Sampat, et al. arXiv preprint arXiv:2412.01868, 2024. Cited by 1.

HALOS: Hashing Large Output Space for Cheap Inference. Z. Liu, Z. Xu, A. Ji, J. Zhang, J. Li, B. Chen, A. Shrivastava. Proceedings of Machine Learning and Systems 4, 110–125, 2022. Cited by 1.